So I'm deploying my Spring Boot application on an Ubuntu LTS server. It is built with Maven and runs with embedded Tomcat.
I'm still new to the deployment process, what I did was:
Log into server via ssh
use scp to upload my_application.zip
unzip it in the ssh session
java -jar my_application.jar
All of that works perfectly fine, and I've been using it like that for quite some time. Now I need the application to stay online and available after I log out of the shell.
I have read some documentation about running processes in the background on Linux and have tried nohup java -jar myApplication.jar &, the screen command, and bg. All of them worked fine while I was logged in via ssh.
Here comes my problem:
As soon as I end the ssh session, the web app is still available (so the process clearly didn't stop), but it looks and behaves really strangely: CSS is not applied, JS does not work, etc.
My guess would be that some paths or file system accesses are messed up, but I have no idea at all how that could originate from the ssh session.
(When I log back in via ssh, everything works fine again.)
It would be great if someone has a clue here.
If your server has an encrypted home directory, the directory is unmounted (re-encrypted) once you log out, so your application can no longer read its files and stops working properly. Encrypted home directories don't make much sense on servers, so you can disable that feature.
Or just run the application from a different directory and avoid keeping its files under the home directory.
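For example, a minimal sketch of that second suggestion (the /opt path is just an assumption; adjust to your own layout):
sudo mkdir -p /opt/my_application
sudo cp ~/my_application.jar /opt/my_application/
cd /opt/my_application && nohup java -jar my_application.jar &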
I think you should use systemd for this case.
You can also add a new system user for your app.
You can find more information here:
Spring Boot: 59.2.2 Installation as a systemd service
Ubuntu Wiki: SystemdForUpstartUsers
For example:
Create a file myunit.service:
[Unit]
Description=MySpringService
After=syslog.target
After=network.target
After=mysql.service
[Service]
Type=forking
PIDFile=/work/www/myunit/shared/tmp/pids/service.pid
WorkingDirectory=/work/www/myunit/current
User=myunit
Group=myunit
Environment=RACK_ENV=production
OOMScoreAdjust=-1000
ExecStart=/usr/local/bin/bundle exec service -C /work/www/myunit/shared/config/service.rb --daemon
ExecStop=/usr/local/bin/bundle exec service -S /work/www/myunit/shared/tmp/pids/service.state stop
ExecReload=/usr/local/bin/bundle exec service -S /work/www/myunit/shared/tmp/pids/service.state restart
TimeoutSec=300
[Install]
WantedBy=multi-user.target
Copy file to /etc/systemd/system/
Run:
systemctl enable myunit
systemctl start myunit
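The example above is adapted from a forking Ruby service; for a plain Spring Boot executable jar you usually don't need Type=forking or a PID file. A minimal sketch (paths and the user name here are assumptions):
[Unit]
Description=my_application
After=syslog.target network.target
[Service]
User=myapp
WorkingDirectory=/opt/my_application
ExecStart=/usr/bin/java -jar /opt/my_application/my_application.jar
# the JVM exits with 143 on SIGTERM, so treat that as a clean stop
SuccessExitStatus=143
Restart=on-failure
[Install]
WantedBy=multi-user.target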
I ssh into the EC2 instance using the command prompt and then launch the jar file from there. The web app runs perfectly from any device until the command prompt is closed; then the site immediately goes down. The instance is still shown as "running" when the site is down. Any ideas?
Well, EC2 is the virtual machine, and it will show as running because you didn't shut it down or terminate it.
Your webapp is down because closing the command prompt quits the shell session, which terminates/kills the running jar.
It seems you are not running the jar as a background process.
If you are using a Linux EC2 instance, then try running your jar as
$ java -jar jarfilename.jar &
The & runs your java process as a background job.
Note down the process id, then close the session. Your webapp will keep running as long as your EC2 instance is running.
I'd suggest reading about nohup and background processes in Linux in general.
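As a concrete sketch of that pattern (the log and pid file names are assumptions):
nohup java -jar jarfilename.jar > app.log 2>&1 &
echo $! > app.pid
The redirection keeps the output out of nohup.out, and $! is the process id of the job you just put in the background, so you can stop it later.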
I'm learning to deploy Spring Boot apps on AWS EC2, and I know how to automate the app launch when the EC2 instance starts: instead of manually running java -jar java-service.jar, I just add that command to the /etc/rc.local file. But I have two microservices and I want to start both of them automatically. If I add both commands to /etc/rc.local it doesn't work: only the first service starts, the second one does not.
I added both commands to /etc/rc.local, but after I start the EC2 instance only the first service is started.
Thank you!
I am not a Unix expert, but the issue with running two java commands from a script is that the second command is not executed until the first one returns. So I think the solution is to run the first command in the background so that the other commands can execute at the same time.
There are ways in unix shell to run a command in background. I found this useful link - https://www.maketecheasier.com/run-bash-commands-background-linux/
In a bash terminal, a command can be made to run in the background by appending & to it. So I think you should be able to start both jars if you do something like this:
java -jar /home/ec2-user/first.jar &
java -jar /home/ec2-user/second.jar
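In /etc/rc.local specifically, it may be safer to background both commands and redirect their output so the boot script itself is not blocked; a sketch (the log paths are assumptions):
java -jar /home/ec2-user/first.jar > /home/ec2-user/first.log 2>&1 &
java -jar /home/ec2-user/second.jar > /home/ec2-user/second.log 2>&1 &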
I recommend using systemd.
Create a systemd unit file for each microservice and save it as /etc/systemd/system/my-app.service, something like this:
[Unit]
Description=My Java app
After=syslog.target network.target
[Service]
EnvironmentFile=/etc/sysconfig/my-app-env
WorkingDirectory=/my/app/home
ExecStart=/usr/bin/java $JAVA_OPTS -jar my-app.jar
KillMode=process
User=my-app-user
Restart=on-failure
[Install]
WantedBy=multi-user.target
Then, run:
systemctl daemon-reload
systemctl enable --now my-app
After that, you can use:
systemctl status my-app
systemctl stop my-app
systemctl start my-app
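The EnvironmentFile referenced above just holds the variables that ExecStart expands, such as $JAVA_OPTS; a minimal sketch of /etc/sysconfig/my-app-env (the values are assumptions):
JAVA_OPTS="-Xmx256m -Dspring.profiles.active=prod"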
Another solution is to bundle your jars into Docker images. This of course requires a Docker runtime and adds some overhead, but it also has benefits:
Complete separation of jar files. Easily use different java versions.
No need to worry about differences of local and ec2 environment.
Easily scale to 3 or more jars.
Use the Docker CLI to build and start containers; it works great in a DevOps pipeline.
You can read here to learn how to create Spring Boot Docker images. After you build an image, you start it like this:
docker run -p 8080:8080 springio/gs-spring-boot-docker
You can run as many docker run commands as you need, one after another.
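For example, to run two services side by side you could map each container to a different host port (the image names here are placeholders):
docker run -d -p 8080:8080 myorg/first-service
docker run -d -p 8081:8080 myorg/second-service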
I am not sure which system you are using to start the application.
On a Linux-based system, you can use crontab to schedule the task to run when the server reboots.
Follow these steps:
Install cron
#apt-get install cron
Edit the crontab file to add the task
crontab -e
(Choose Vim or nano to edit the task)
Add this line to your crontab:
@reboot /usr/bin/java -jar XXXXX.jar
Save your file
Check the result
crontab -l
#systemctl status cron
This method works on my Debian system. For more details, you can refer to:
How to automatically run program on Linux startup
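For the two-microservice case above, a sketch of the crontab with both jars (paths and log files are assumptions):
@reboot /usr/bin/java -jar /home/ec2-user/first.jar > /home/ec2-user/first.log 2>&1
@reboot /usr/bin/java -jar /home/ec2-user/second.jar > /home/ec2-user/second.log 2>&1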
If you are running from bash, then join the two jar commands with &, as below; the & puts the first jar in the background so the second one can start.
java -jar /home/ec2-user/first.jar & java -jar /home/ec2-user/second.jar
coupon service
Run the command 'java -jar /home/ec2-user/coupon-service-0.0.1-SNAPSHOT.JAR'
Press CTRL+Z, type bg, press Enter, type disown, press Enter.
product service
Run the command 'java -jar /home/ec2-user/product-service-0.0.1-SNAPSHOT.JAR'
Press CTRL+Z, type bg, press Enter, type disown, press Enter.
NOTE: Both services should have different ports.
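Assuming these are Spring Boot jars, one way to give them different ports is to override server.port on the command line, for example:
java -jar /home/ec2-user/product-service-0.0.1-SNAPSHOT.JAR --server.port=8081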
I am new to Java. I have a HAPI FHIR server running on AWS, set up by cloning this repository (https://github.com/hapifhir/hapi-fhir-jpaserver-starter).
I run my server with the following command: "sudo mvn -e jetty:run"
--
My Problem:
As soon as I log out of AWS, my server stops; it only runs while I am logged in to my AWS instance via the .pem file. The instance runs Ubuntu 18.04 LTS with an nginx server.
Thanks
The ideal approach for running a Java application on AWS is to run it as a daemon by setting up a systemd unit (or an init script) on Linux.
In your case the application stops as soon as you close the terminal because you started it in the terminal without nohup: when the terminal is closed, the controlling shell goes away and the application is stopped with it. If you just want to launch the application in the background without the hassle of actually setting it up as a service in Linux, you can use the nohup command (setting up a systemd unit to register the Java application as a service is still the preferred approach):
nohup java -jar yourjarName &
Run it in the background:
"sudo mvn -e jetty:run &"
The & makes the command run in the background.
From man bash:
If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0.
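One caveat: a job backgrounded with & alone can still receive SIGHUP when the ssh session closes or drops, so pairing it with nohup (or disown) is safer, e.g. something like (the jetty.log name is just an assumption):
nohup sudo mvn -e jetty:run > jetty.log 2>&1 &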
I am new to containerizing apps using Docker. I deployed a container including a WAR file; the WAR is basically a Java web application/servlet that sends back a video file upon receiving a request from the end user. The deployment using Docker was a success and the app works fine. However, I have some issues regarding its boot time.
From the moment I create the container with the command docker run -it -d -p 8080:8080 surrogate, it takes about 5-6 minutes for the container to become operational, meaning that for the first 5-6 minutes of the container's lifetime it does not respond to end-user requests; after that it works fine. Is there any way to accelerate this boot time?
Dockerfile includes:
FROM tomcat:7.0.72-jre7
ADD surrogate-V1.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
WORKDIR "/usr/local/tomcat/"
RUN wget https://www.dropbox.com/s/1r8awydc05ra8wh/a.mp4?dl=1
RUN cp a.mp4\?dl\=1 lego.mp4
(Posted on behalf of the OP).
First, get rid of -d in the docker run command to see what is going on in the background. I noticed the WAR deployment phase was taking very long (around 15-20 minutes!).
The reason in my case was that the Tomcat version in the Dockerfile was different from the Tomcat version in the environment from which I exported the web application as a WAR. (To check the JRE version, enter java -version in a terminal; to check the Tomcat version, Eclipse shows it when you export the WAR.)
In my case in Dockerfile, I had :
FROM tomcat:7.0.72-jre7
I changed it to:
FROM tomcat:6.0-jre7
It now takes less than 10 seconds!
In a nutshell, make sure that the Tomcat version and JRE versions in the Dockerfile are the same as the environment from which you exported the Java web application as WAR.
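A quick, rough way to compare the two environments (assuming the standard tomcat image layout, where CATALINA_HOME is /usr/local/tomcat):
java -version
docker run --rm tomcat:6.0-jre7 /usr/local/tomcat/bin/version.sh
The first shows the JRE you exported with; the second prints the Tomcat and JVM versions inside the image.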
I've made my first Clojure app and built it with Leiningen by running lein ring uberjar. I can run it on my dev machine (Fedora 23) and on my Ubuntu production machine with java -jar tomahawk.jar. I've followed the DigitalOcean tutorial here that describes how to set up the environment. My supervisor conf is the following:
[program:tomahawk]
environment=TOMAHAWK_DB=$TOMAHAWK_DB
command=/usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar tomahawk.jar
logfile=/tmp/supervisord.log
directory=/var/www/tomahawk/app
user=www-data
autostart=true
autorestart=true
startretries=3
redirect_stderr=true
stdout_logfile=/var/www/logs/tomahawk.app.log
However, when using this through nginx, I get the error:
2016-03-19 23:34:08.594:WARN:oejs.AbstractHttpConnection:/
java.sql.SQLException: No suitable driver found for jdbc:sqlserver:$
at java.sql.DriverManager.getConnection(DriverManager.java:596)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:255)
The driver comes from the com.microsoft.sqldbc4.jar file. However, I can run the standalone jar file just fine, and I can also run sudo -u www-data env TOMAHAWK_DB=$TOMAHAWK_DB java -jar tomahawk.jar without issues. Also, while that is running, I can hit it through the nginx reverse proxy.
I've searched around and found an unanswered email on the supervisor mailing list from 2011. Why is the driver jar found only some of the time?
I've tried putting it in the /usr/lib/jvm/.../lib/ext location for my JVM, bundling it inside the jar file, running from Tomcat, etc. I've also made an upstart script, with identical results. I am not sure what other paths I can try and was wondering if anyone had any insights.