I am trying to set up Let's Encrypt for a Spring Boot app. I deploy with Docker: I build an image locally, push it to Docker Hub, and then run it on Ubuntu with docker run -d -p 80:80 myapp:latest. That only serves plain HTTP, so now I want to add Let's Encrypt, but I don't know how to do it. Any help or pointers to relevant links would be highly appreciated. Thanks
Architecture: need for a reverse proxy + a container orchestration tool
If your Spring Boot container only serves plain HTTP, you can put a TLS termination proxy in front of it: the proxy accepts the incoming TLS connections and forwards the decrypted requests to your container.
Many reverse-proxy implementations can play this TLS-termination role (see this paragraph on Wikipedia), obtaining their certificates from Let's Encrypt as you suggest.
Most of these implementations are also available as Docker images, so you can rely on a container orchestration tool such as docker-compose, with a docker-compose.yml that creates a private network over which the two containers communicate (or use a more involved orchestration solution such as Kubernetes).
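To make the architecture concrete, here is a minimal docker-compose.yml sketch of that pairing, assuming nginx as the TLS-terminating proxy and that the Spring Boot app listens on port 8080 inside its container (service names, paths, and images are placeholders; the nginx config and the certificates still have to be provided, e.g. via certbot):

version: "3"
services:
  myapp:
    image: myapp:latest                     # your Spring Boot image, plain HTTP only
    expose:
      - "8080"                              # reachable only on the private compose network
  proxy:
    image: nginx:stable
    ports:
      - "80:80"                             # plain HTTP, e.g. for the ACME challenge and redirects
      - "443:443"                           # TLS is terminated here
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./letsencrypt:/etc/letsencrypt:ro   # certificates obtained for your domain
    depends_on:
      - myapp

Inside nginx.conf, the server block for your domain would then proxy_pass to http://myapp:8080, using the compose service name as the hostname on the private network.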
Overview of several Docker implementations
To give a few examples of Docker images that implement this, you could for instance use one of these popular reverse proxies (the first two are also mentioned in Gitea's doc):
NGINX (also bundled in projects like https-portal to automate the certificate generation),
Apache2 httpd,
Træfik, which additionally provides a "monitoring dashboard" as a webapp (see also the official doc, which gives many details on automatic certificate generation); a short compose sketch for this option follows the list.
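As a rough illustration of the Træfik option (option and label names as in the Traefik v2 documentation; the domain, e-mail address, and resolver name are placeholders), automatic certificate generation can be driven entirely from docker-compose labels:

version: "3"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik discover the app container
  myapp:
    image: myapp:latest
    labels:
      - traefik.http.routers.myapp.rule=Host(`myapp.example.com`)
      - traefik.http.routers.myapp.entrypoints=websecure
      - traefik.http.routers.myapp.tls.certresolver=le

Traefik watches the Docker socket, picks up the labels, requests the certificate for the declared host, and renews it on its own.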
If you want to keep things simple (one container only, with just the Spring Boot app and no sidecar), you can use the LetsEncrypt Spring Boot helper library. It allows the embedded Tomcat in a Spring Boot app to obtain a Let's Encrypt certificate and renew it automatically.
Its key advantage is that it is native Java + Spring Boot: no sidecars are necessary; just embed the library, expose ports 80 and 443, and everything else is done automatically.
The reason I created it is that all those sidecars just add configuration complexity to the application. For simple pet projects that extra complexity is not justified, and the embedded Tomcat that ships with Spring Boot is more than enough for this kind of project.
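With that approach, the run command from the original question would only change to publish both ports and persist the keystore the library maintains, along the lines of (the volume and keystore paths are placeholders; check the library's README for the exact properties):

docker run -d -p 80:80 -p 443:443 -v /opt/myapp/letsencrypt:/letsencrypt myapp:latest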
Background Context:
Due to enterprise limitations, an uncooperative 3rd party vendor, and a lack of internal tools, this approach has been deemed most desirable. I am fully aware that there are easier ways to do this, but that decision is a couple of pay grades away from my hands, and I'm not about to fund new development efforts out of my own pocket.
Problem:
We need to send an internal file to an external vendor. The team responsible for these types of files only transfers with SFTP, while our vendor only accepts files via REST API calls. The idea we came up with (considering the above constraints) was to use our OpenShift environment to host a "middle-man" SFTP server (running from a jar file) that will hit the vendor's API after our team sends it the file.
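For reference, a minimal sketch of what the middle-man's embedded SFTP server could look like, assuming Apache MINA SSHD (the library choice, port, paths, and credentials are placeholders, not something prescribed by our setup; package names vary slightly between MINA SSHD versions):

import java.nio.file.Paths;
import java.util.Collections;

import org.apache.sshd.common.file.virtualfs.VirtualFileSystemFactory;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;
import org.apache.sshd.sftp.server.SftpSubsystemFactory;  // org.apache.sshd.server.subsystem.sftp in older releases

public class SftpBridge {
    public static void main(String[] args) throws Exception {
        SshServer sshd = SshServer.setUpDefaultServer();
        sshd.setPort(2222);  // non-privileged port inside the pod
        sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")));
        sshd.setPasswordAuthenticator((user, pass, session) -> "transfer-team".equals(user));  // placeholder auth
        sshd.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));
        sshd.setFileSystemFactory(new VirtualFileSystemFactory(Paths.get("/data/inbound")));
        sshd.start();
        // ... watch /data/inbound and POST each newly uploaded file to the vendor's REST API ...
        Thread.currentThread().join();
    }
}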
I have learned that if we want SFTP to work with OpenShift, we need to set up our cluster and pods with an ingress/external IP. This looks promising, but due to enterprise bureaucracy I'm waiting for the OpenShift admins to make the required changes before I can see whether it works, and I'm running out of time.
Questions:
Is this approach even possible with the technologies involved? Am I on the right track?
Are there other configuration options I should be using instead of what I explained above?
Are there any clever ways for an SFTP client to send a file via an HTTP request? Then, instead of running an embedded SFTP server, we could just set up a web service (which is what our infrastructure supports and prefers).
References:
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-externalip.html
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-service-external-ip.html#configuring-ingress-cluster-traffic-service-external-ip
That's totally possible; I have done it in the past with OpenShift 3.10. Using externalIPs is the right approach.
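For illustration, the Service ends up looking something like this (names, ports, and the address are placeholders; the external IP must be one your OpenShift admins have allowed under the cluster's ExternalIP policy, as described in the docs you linked):

apiVersion: v1
kind: Service
metadata:
  name: sftp-bridge
spec:
  selector:
    app: sftp-bridge        # matches the pod running the jar
  ports:
    - name: sftp
      port: 22              # port the sending team connects to
      targetPort: 2222      # port the embedded SFTP server listens on inside the pod
  externalIPs:
    - 192.0.2.10            # address routed to a cluster node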
I use an nginx reverse proxy with Docker and want to automate my nginx configuration.
For example, I want to give my Java app a domain/server_name (e.g. myapp.example.com) and a backend, and the Java app should tell nginx to configure that.
Is this possible, or is there alternative reverse-proxy software with that functionality?
One way to achieve this is to use a shared volume that both containers (the Java container and the nginx container) can access and where the nginx configuration file is placed. This also works if Java is not running in Docker; it just needs access to the mapped folder.
Whenever you want to update the config, rewrite the file and then trigger nginx to reload. There are multiple ways to do this, most easily with
docker exec nginx-container-name nginx -s reload
e.g. via Java's ProcessBuilder or the awesome docker-java project, https://github.com/docker-java/docker-java.
Note: if you run Java inside a Docker container, you have to map the Docker socket into the container (e.g. with -v /var/run/docker.sock:/var/run/docker.sock on the docker run command line).
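Putting the pieces together, a sketch of the Java side could look like this (the shared path, container name, and config template are placeholders; it calls the docker CLI via ProcessBuilder, so the docker binary and the mounted socket must be available inside the Java container, otherwise use docker-java over the socket; Java 15+ for the text block):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NginxConfigUpdater {

    // File on the volume shared with the nginx container (placeholder path)
    private static final Path SHARED_CONF = Path.of("/shared/conf.d/myapp.conf");

    static void updateAndReload(String serverName, String upstream)
            throws IOException, InterruptedException {
        String conf = """
                server {
                    listen 80;
                    server_name %s;
                    location / {
                        proxy_pass %s;
                    }
                }
                """.formatted(serverName, upstream);
        Files.writeString(SHARED_CONF, conf);  // rewrite the config on the shared volume

        // Ask the nginx container to reload its configuration
        Process reload = new ProcessBuilder(
                "docker", "exec", "nginx-container-name", "nginx", "-s", "reload")
                .inheritIO()
                .start();
        if (reload.waitFor() != 0) {
            throw new IllegalStateException("nginx reload failed");
        }
    }
}

Calling updateAndReload("myapp.example.com", "http://backend:8080") would then publish the new server block without restarting nginx.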
I don't know if it is possible, but I have heard that the best practice is to create the frontend and the backend as two independent projects. To do that, I should use Nginx, right? But how exactly do I do that, and how does it work?
I create an Angular 2 project with Node.js and start its server listening on, for example, port 80.
Then I create a Java project with Jetty and start that server listening on, for example, port 90.
Should I then somehow create an Nginx project that merges the frontend and the backend? I need help because I'm afraid I don't understand how to do that.
It looks like you're mixing up a few things here:
In general, your backend is either written in JavaScript and runs on Node.js, or written in Java (or another JVM language) as a servlet, in which case it runs inside a servlet container like Jetty or Tomcat.
A web server like nginx or Apache httpd can then be placed in front of the backend service to serve static content and to provide caching, security, load balancing, etc.
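For the setup described in the question, a minimal nginx server block might look like this (the paths and the /api prefix are assumptions; adjust them to wherever the built Angular files live and to your backend's URL layout):

server {
    listen 80;
    server_name example.com;

    root /var/www/frontend/dist;           # output of `ng build`
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;  # let Angular handle client-side routes
    }

    location /api/ {
        proxy_pass http://localhost:90/;   # the Jetty backend from the question
        proxy_set_header Host $host;
    }
}

With this, the browser only ever talks to nginx on port 80; Node.js is needed at build time only, and Jetty never has to be exposed directly.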
I have just finished developing a REST web service that is consumed by a mobile application. The web service is developed with Java and runs on an Apache server.
I'm now moving on to the testing part, and for that purpose I need to host my web service on a real server.
This is a first experience for me, and I have just learned that shared (mutualised) hosting does not allow me to host any application in any language.
The host I have access to is OVH, which does not support hosting Java web services.
Does anybody have an alternative to suggest? It would help a lot!
Like I said, if it is for testing purposes you could always use a "normal" PC, running something like XAMPP.
As an alternative you could give RedHat's OpenShift a try, which offers a free, getting-started plan (more info here) that should more than cover your testing requirements.
To run your app in Eclipse, you would use: Run As -> Run on Server
Then select a server. If you haven't done so yet, I suggest you install a local JBoss/WildFly server (the wizard can take care of that for you).
Doing this will display options to run your app either on the local server or on the OpenShift/rhcloud server. This makes testing faster and lets you avoid testing on the live OpenShift server.
I have WAMP installed on my local machine and am looking to serve up charts using jFree's Eastwood charting, which requires me to use servlets. So basically I will insert images with src tags that have URLs pointing to my servlet on the same machine.
What's the easiest way to enable servlets on the same machine? Do I need to install a servlet server on a different port? Or is there a way to integrate it into WAMP?
You have a couple of choices.
The easiest option is to get a web container such as Tomcat or Jetty and run it on a different port (by default usually 8080).
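Your pages served by WAMP would then simply point at that port for the chart images, e.g. (the servlet path and parameters below are placeholders, not Eastwood's actual API):

<img src="http://localhost:8080/charts/chart?type=pie&width=400&height=300" alt="chart">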
A web container can also be integrated into Apache, which tends to be what happens on production sites; see the Tomcat-Apache HOWTO or "Apache 2 with Tomcat 6: How to Configure". It's probably overkill for a local install.
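If you do want that integration, a simple route today is mod_proxy_http (rather than the AJP connector those HOWTOs describe); a sketch for httpd.conf, with the context path as a placeholder:

# requires LoadModule proxy_module and LoadModule proxy_http_module
ProxyPass        /charts http://localhost:8080/charts
ProxyPassReverse /charts http://localhost:8080/charts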