Kubernetes: Fetch system IP:PORT on which our pod is running - java

We have a setup where a single pod runs on a node on our machine.
In a normal Java application, java.net.InetAddress.getLocalHost().getHostAddress() returns the host address of the system where the server is running. But when the same API is called from a Java application running inside a Kubernetes pod, it returns the pod's host address. How can I get the host address of the node the Kubernetes pod is running on, rather than the pod's address?
Edit: I do not want to add Kubernetes dependencies, so please suggest how to achieve this in plain Java without the Kubernetes APIs.
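One dependency-free approach (a sketch, not from the original post) is to let Kubernetes hand the node's IP to the container through the Downward API and then read it as a plain environment variable; the variable name NODE_IP below is an arbitrary choice:

// Sketch: assumes the pod spec maps status.hostIP to an env var named
// NODE_IP via the Downward API, e.g.
//   env:
//   - name: NODE_IP
//     valueFrom:
//       fieldRef:
//         fieldPath: status.hostIP
public class NodeIpReader {
    public static void main(String[] args) {
        // Plain JDK call, no Kubernetes client library needed.
        String nodeIp = System.getenv("NODE_IP");
        if (nodeIp == null) {
            throw new IllegalStateException("NODE_IP is not set; check the pod spec");
        }
        System.out.println("Node (host) IP: " + nodeIp);
    }
}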

Related

Java fabric8 kubernetes client to get the name of the deployed kubernetes pod's cluster

I want to use the fabric8 Kubernetes client (Java) inside a pod to obtain the cluster it is deployed on.
Currently, I can obtain the context using:
KubernetesClient client = new DefaultKubernetesClient();
return client.getConfiguration();
When running locally, I can see the current Kubernetes context. However, when this runs in the deployed pod, I am unable to see the Kubernetes context.
Why does it not work in the deployed pod? How can I detect which Kubernetes cluster the pod is deployed on?
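A likely explanation, offered as a sketch rather than a definitive answer: inside a pod the fabric8 client auto-configures from the mounted service account token and the KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT environment variables, not from a ~/.kube/config file, so there is no named "context" to report. You can still read the API server URL and namespace from the resolved configuration:

import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class ClusterInfo {
    public static void main(String[] args) {
        // Auto-configures from ~/.kube/config locally, or from the service
        // account token and env vars when running inside a pod.
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            System.out.println("API server: " + client.getConfiguration().getMasterUrl());
            System.out.println("Namespace:  " + client.getNamespace());
        }
    }
}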

Dockerized tomcat webapp throwing 404 exception when connecting to another localhost URL

I've got an application running in Tomcat 8.5 in a Docker container. The Java version is JDK 8. The app uses a properties file that points to another running service. In this case, I have the service running on my host machine and the property is set to point to my localhost:
my.external.service.url=http://localhost:8080/my-api-service
When my tomcat app is also running on the host machine, this works fine. But when my app runs in the docker container, I get a 404 error when it tries to call this service.
I tried switching the URL to point to my machine name:
my.external.service.url=http://my.pc.url.com:8080/my-api-service
But in this case, while it still works when the Tomcat app runs on the host instead of in the Docker container, from the container I get a different error:
java.net.UnknownHostException: my.pc.url.com: No address associated with hostname
How do I configure the container so that I can get this to work?
Looking at the specific error you got - No address associated with hostname - it means the container cannot resolve the hostname to an IP. So you can find your host machine's local IP and then pass an argument to the docker run command to add that host to the container's lookup. Eg: --add-host="my.pc.url.com:X.X.X.X"
Docker networking is a potentially complex thing. Basically, if you run two separate containers they will not 'see' each other. You could have them both expose ports on the host machine and have them 'connect' that way, but you are better off using the networking features of Docker to ensure the containers talk to each other.
For a simple example like yours - say, two containers running Tomcats that you would like to talk to each other - you might be best off just running them both via docker-compose so that they are on the same network.
There are many resources on the internet to explain the further details of docker networking if you wish to explore.
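Whichever networking route you take, it also helps to make the target URL overridable at runtime rather than baking localhost into the properties file, so the same build works on the host and in the container. A rough sketch; the environment variable name and properties file location below are just examples:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ServiceUrlResolver {
    // Returns the external service URL, preferring an environment variable
    // (easy to set via `docker run -e`) over the bundled properties file.
    public static String resolve() throws IOException {
        String fromEnv = System.getenv("MY_EXTERNAL_SERVICE_URL"); // hypothetical name
        if (fromEnv != null && !fromEnv.isEmpty()) {
            return fromEnv;
        }
        Properties props = new Properties();
        try (InputStream in = ServiceUrlResolver.class
                .getResourceAsStream("/application.properties")) { // assumed location
            props.load(in);
        }
        return props.getProperty("my.external.service.url");
    }
}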

How can I access a service installed on Kubernetes from anywhere?

I am working on a Mac and installed the latest Kubernetes, then followed the example here (this is for development purposes). It all went smoothly, but I was hoping Kubernetes would give me an IP address and port number that my service listens on, so that I can access it from anywhere.
Please correct me if I am wrong.
I was able to run ifconfig as well as curl $(minikube service hello-minikube --url), and I could see the IP address and port, but I wasn't able to access the service outside the machine where Kubernetes lives.
The reason I am trying to access it from outside the VM is that we have other projects running on other machines, and I want to call the REST service I installed while we are in the dev environment. This way we don't have to wait until the service is pushed to production.
FYI: this is my first microservice project and I would appreciate your feedback.
I followed the steps in the article you linked and it works as expected.
Just do:
minikube service hello-minikube --url
You will get a URL like http://192.168.99.100:32382/ - the IP and port could and will be different for you. Also note that the exposed port is a random NodePort like 32382, not the 8080 that the pod uses.
Open the URL in your browser and you should be able to see the output of the service.
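If other machines on your dev network can route to the minikube VM's IP, any plain HTTP client can call that NodePort URL. A small Java sketch; the IP and port below are just the example values from above and will differ for you:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CallHelloMinikube {
    public static void main(String[] args) throws Exception {
        // Replace with whatever `minikube service hello-minikube --url` prints.
        URL url = new URL("http://192.168.99.100:32382/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}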

Configure Spark for a given cluster

I have to submit some applications written in Python to an Apache Spark cluster. I am given a cluster manager and some worker nodes, along with the addresses to send the applications to.
My question is: how do I set up and configure Spark on my local computer to send those requests, along with the data to be processed, to the cluster?
I am working on Ubuntu 16.xx and have already installed Java and Scala. I have searched the internet, but most of what I find is about how to build the cluster, or old advice on how to do it that is out of date.
I assume your remote cluster is running and you are able to submit jobs to it from the remote server itself. What you need is SSH tunneling. Keep in mind that it does not work with AWS.
ssh -f user@personal-server.com -L 2000:personal-server.com:7077 -N
Read more here: http://www.revsys.com/writings/quicktips/ssh-tunnel.html
Your question is unclear. If the data is on your local machine, you should first copy it to the cluster's HDFS filesystem. Spark can work in 3 modes with YARN (are you using YARN or Mesos?): cluster, client and standalone. What you are looking for is client mode or cluster mode. If you want to start the application from your local machine, use client mode. If you have SSH access, you are free to use either.
The simplest way, if the cluster is properly configured, is to copy your code directly onto the cluster and then start the application with the ./spark-submit script, providing the class to use as an argument. It works with Python scripts as well as Java/Scala classes (I only use Python, so I don't know the details for the latter).
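If you would rather keep the driver on your local machine (client mode against a standalone master), the shape is roughly the following. This is only a sketch: spark://cluster-manager-host:7077 is a placeholder for your cluster manager's address, and since the question uses Python, the pyspark equivalent is SparkConf().setMaster(...) plus spark-submit.

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class RemoteSparkSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("remote-submit-sketch")
                .setMaster("spark://cluster-manager-host:7077"); // placeholder address
        // Driver runs locally (client mode); executors run on the cluster's workers.
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            long count = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
            System.out.println("count = " + count);
        }
    }
}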

Run Jetty Website on Azure Virtual Machine

I have created a Java web application using Jetty (in Eclipse, using OSGi etc.). The application itself runs quite well when tested locally, so I wanted to run it on an Azure virtual machine to make it accessible to external users (for testing purposes).
What I did so far:
created an Azure account
created a virtual machine with Windows Server running on it
downloaded all my Eclipse files etc. to the virtual machine
started the application in the virtual machine (in fact in Eclipse, not as the compiled jar); the application is published on port 8080
so, when I run a web browser in the VM and connect to localhost:8080, everything works well
but when I try to access the website externally (using the VM's assigned domain, something.cloudapp.net:8080), it does not work
I also created endpoints in the Azure management console for this VM (80, 8080, etc.)
Has anyone ever tried to run a Java webapp on Azure, or does anyone have a hint about what could go wrong here?
By default, Windows servers in Azure have the Windows firewall enabled. This blocks external connections to port 8080. Try adding an appropriate exception to the Windows firewall rules.
According to your description, I think you have correctly configured the new endpoints for the Java webapp's network traffic. If not, or if they are configured incorrectly, please refer to the article https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-create-nsg-arm-pportal/ and configure them again.
Then, as @CtrlDot said, you need to configure the firewall to allow inbound traffic on Windows Server.
For reference, please see the article about allowing inbound traffic to a specific TCP or UDP port on Windows Server.
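Besides the endpoint and the firewall, it is worth double-checking on the Java side that the embedded Jetty connector is not bound to the loopback interface only. A minimal sketch using the Jetty 9 API: constructing the server with just a port listens on 0.0.0.0 (all interfaces), which is what external access through the Azure endpoint requires.

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class HelloJetty {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080); // binds to 0.0.0.0:8080, i.e. all interfaces
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                response.setContentType("text/plain");
                response.getWriter().println("hello from jetty on azure");
                baseRequest.setHandled(true);
            }
        });
        server.start();
        server.join();
    }
}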
