Read Console input on spring boot with tomcat - java

Is it possible to read console input just before the embedded Tomcat in Spring Boot starts? The intended application flow is: request a username and password from the user, and use those credentials to start the application. It works when using the java -jar command; the problem is that when I close the console (an SSH session on Linux) the process stops. I searched around and found out that the process is tied to the console, so I tried using nohup, but then I cannot request console input at all. Is there any other way?

I think this can help you.
public static void main(String[] args) {
    Scanner scanner = new Scanner(System.in);
    // prompt for the user's name
    System.out.print("Enter your name: ");
    // get their input as a String
    String username = scanner.next();
    System.out.println(username);
    SpringApplication.run(Application.class, args);
}
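If you also want to read a password without echoing it, java.io.Console can be used instead of Scanner. Note that System.console() returns null when the JVM is not attached to an interactive terminal (for example under nohup), which is exactly the limitation described in the question. A minimal sketch, reusing the Application class from above:
import java.io.Console;

public static void main(String[] args) {
    Console console = System.console();
    if (console == null) {
        // no interactive terminal attached (e.g. started via nohup)
        System.err.println("No console available, cannot prompt for credentials.");
        return;
    }
    String username = console.readLine("Enter your name: ");
    char[] password = console.readPassword("Enter your password: ");
    // credentials collected; now bootstrap the application
    SpringApplication.run(Application.class, args);
}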

Get the username and password in a shell script before executing your Java program, and pass them as arguments to your application.
#!/bin/bash
# Ask login details
read -p 'user: ' uservar
read -sp 'password: ' passvar
echo
Now that you have the user and password, you can run the java command with nohup and pass them as JVM system properties (note that -D options must come before -jar, otherwise they are treated as program arguments). You can also pass the user and password as program arguments, as suggested in the other answer.
Like nohup java -Duser=$uservar -Dpassword=$passvar -jar abc.jar &
And fetch these properties using
String user = System.getProperty("user");
String password = System.getProperty("password");
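For completeness, here is a minimal sketch (not from the original answer) of how the main method could check these properties before starting Spring Boot, assuming the property names user and password from the command above:
public static void main(String[] args) {
    String user = System.getProperty("user");
    String password = System.getProperty("password");
    // refuse to start unless both system properties were supplied
    if (user == null || password == null) {
        System.err.println("user and password system properties are required");
        System.exit(1);
    }
    SpringApplication.run(Application.class, args);
}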

You can use nohup to start the jar with the parameters; the user just won't be prompted for them interactively in the terminal. Instead, the user can supply them as arguments when starting the jar. See details below.
Example:
Main Class
public static void main(String[] args) {
    String username = args[0];
    String password = args[1];
    if (username.equals("admin") && password.equals("password")) {
        SpringApplication.run(NohupApplication.class, args);
    } else {
        System.out.println("You are not authorized to start this application.");
    }
}
With Invalid Credentials
Command
nohup java -jar example.jar user password
nohup.out
You are not authorized to start this application.
With Valid Credentials
Command
nohup java -jar example.jar admin password
nohup.out
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.5.2.RELEASE)
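Keep in mind that credentials passed as program arguments are visible in the process list (for example via ps), so this approach trades the interactive prompt for arguments that other users on the machine may be able to read.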

If you want the full log, including logger and System.out.println() output, redirect it to a file:
nohup java -jar yourJarFile.jar admin password >> fullLogOutput.log &
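Here >> appends to fullLogOutput.log rather than overwriting it; if you want to be explicit about capturing error output in the same file as well, add 2>&1 before the &.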
Another issue:
the problem is when I close the console (SSH on Linux) the process stops.
So you want to run your jar file as a background process that does not stop when the console is closed. Just add the & symbol after the full command and it will keep running. You can use the following command:
nohup java -jar yourJarFile.jar &
To capture the full log, including console output:
nohup java -jar yourJarFile.jar >> fullLogOutput.log &
The & symbol runs the program in the background, and the nohup utility keeps the command running even after you log out (it ignores the hangup signal sent when the terminal closes).
Stopping/killing the background process:
To stop the background process, use ps -aux to find the process id and then kill it:
ps -aux | grep yourJarFile.jar
This gives you the process id (PID). To kill that process:
sudo kill -9 <pid>
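If the application should shut down gracefully, it is worth trying a plain kill <pid> (SIGTERM) first; kill -9 (SIGKILL) ends the process immediately without giving Spring Boot a chance to clean up.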
Resource Link: https://stackoverflow.com/a/12108646/2293534

Related

"-Dapp.pid=%%" passes in an incorrect pid to java jvm arguments in start script

In the start script for my application, the service is started with the following lines:
JVM_OPTS=$DEFAULT_JVM_OPTS" "$JAVA_OPTS" "$${optsEnvironmentVar}" -Dapp.pid=$$ -Dapp.home=$APP_HOME -Dbasedir=$APP_HOME"
exec nohup "$JAVACMD" -jar $JVM_OPTS <% if ( appNameSystemProperty ) { %>\"-D${appNameSystemProperty}=$APP_BASE_NAME\" <% } %> $CLASSPATH server /resources/config.yml > /home/testUser/stdout.out 2> /home/testUser/stderr.err &
The application starts up fine, but during code review we noticed that the value of -Dapp.pid was incorrect: we checked it against ps -aux | grep appName, comparing it to the PID reported by that command and to the PID output by pgrep -f appName. I would like to know if there is any way to assign the correct PID to the parameter. So far, I've tried setting it to be:
-Dapp.pid=`pgrep -f appName`
But that simply ends up with -Dapp.pid being blank, which I assume is due to it calling that command before the exec is fully run. Has anyone else come across this before?
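Not part of the original question, but one way to sidestep the script is to let the JVM determine its own PID at startup instead of relying on the value injected through -Dapp.pid. A minimal sketch (ProcessHandle requires Java 9+; the RuntimeMXBean fallback relies on the runtime name having the usual pid@hostname form):
import java.lang.management.ManagementFactory;

public final class Pid {
    public static long currentPid() {
        // Java 9+: return ProcessHandle.current().pid();
        // Pre-Java 9 fallback: the runtime name is usually "<pid>@<hostname>"
        String name = ManagementFactory.getRuntimeMXBean().getName();
        return Long.parseLong(name.split("@")[0]);
    }

    public static void main(String[] args) {
        // expose the real PID under the same system property the script tries to set
        System.setProperty("app.pid", String.valueOf(currentPid()));
        System.out.println("app.pid = " + System.getProperty("app.pid"));
    }
}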

fabric-java-sdk: where to change the chaincode

Foreword
Hi, I am new to stackoverflow. If there is any place that is not clear, please point it out. Thank you!
Question
I just started studying hyperledger-fabric. As a Java programmer, I chose to use the fabric-java-sdk.
After I got the test case End2endIT.java to run, I wanted to change the chaincode. I found example_cc.go at fabric-sdk-java/src/test/fixture/sdkintegration/gocc/sample1/src/github.com/example_cc/example_cc.go . However, after I changed the chaincode, it didn't work. Even after I deleted the file, the test case could still run.
Therefore, I guess I was looking in the wrong place. Can anyone tell me where to change the chaincode? Thanks!
Additional
The code to load chaincode
if (isFooChain) {
    // on foo chain install from directory.
    // For GO language and serving just a single user, chaincodeSource is mostly likely the users GOPATH
    installProposalRequest.setChaincodeSourceLocation(new File(TEST_FIXTURES_PATH + "/sdkintegration/gocc/sample1"));
    // [output]: src/test/fixture/sdkintegration/gocc/sample1
    System.out.println(TEST_FIXTURES_PATH + "/sdkintegration/gocc/sample1");
} else {
    // On bar chain install from an input stream.
    installProposalRequest.setChaincodeInputStream(Util.generateTarGzInputStream(
            (Paths.get(TEST_FIXTURES_PATH, "/sdkintegration/gocc/sample1", "src", CHAIN_CODE_PATH).toFile()),
            Paths.get("src", CHAIN_CODE_PATH).toString()));
}
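In both branches the chaincode source ultimately comes from the sdkintegration/gocc/sample1 fixture directory (the bar chain simply packages src/<CHAIN_CODE_PATH> from the same tree into a tar.gz stream), so example_cc.go under that directory is the right file to edit; as explained below, the real problem was that the previously deployed chaincode was still around.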
I solved the problem in the end when I noticed the fabric.sh script in fabric-sdk-java:
./fabric.sh up to force-recreate the Docker containers
./fabric.sh clean to clean the peers
The reason I could still run the invoke request without the chaincode is that I hadn't cleaned the peers' volumes.
The script's source code is as follows:
#!/usr/bin/env bash
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# simple batch script making it easier to cleanup and start a relatively fresh fabric env.
if [ ! -e "docker-compose.yaml" ]; then
  echo "docker-compose.yaml not found."
  exit 8
fi

ORG_HYPERLEDGER_FABRIC_SDKTEST_VERSION=${ORG_HYPERLEDGER_FABRIC_SDKTEST_VERSION:-}

function clean(){
  rm -rf /var/hyperledger/*
  if [ -e "/tmp/HFCSampletest.properties" ]; then
    rm -f "/tmp/HFCSampletest.properties"
  fi
  lines=`docker ps -a | grep 'dev-peer' | wc -l`
  if [ "$lines" -gt 0 ]; then
    docker ps -a | grep 'dev-peer' | awk '{print $1}' | xargs docker rm -f
  fi
  lines=`docker images | grep 'dev-peer' | wc -l`
  if [ "$lines" -gt 0 ]; then
    docker images | grep 'dev-peer' | awk '{print $1}' | xargs docker rmi -f
  fi
}

function up(){
  if [ "$ORG_HYPERLEDGER_FABRIC_SDKTEST_VERSION" == "1.0.0" ]; then
    docker-compose up --force-recreate ca0 ca1 peer1.org1.example.com peer1.org2.example.com ccenv
  else
    docker-compose up --force-recreate
  fi
}

function down(){
  docker-compose down;
}

function stop (){
  docker-compose stop;
}

function start (){
  docker-compose start;
}

for opt in "$@"
do
  case "$opt" in
    up)
      up
      ;;
    down)
      down
      ;;
    stop)
      stop
      ;;
    start)
      start
      ;;
    clean)
      clean
      ;;
    restart)
      down
      clean
      up
      ;;
    *)
      echo $"Usage: $0 {up|down|start|stop|clean|restart}"
      exit 1
  esac
done
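With this script, ./fabric.sh restart runs down, clean, and up in sequence; the clean step removes the stale dev-peer chaincode containers and images, which is why the modified example_cc.go is actually rebuilt and installed on the next test run.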

Running Java Spark program on AWS EMR

I'm having a problem running a Spark application written in Java on AWS EMR.
Locally, everything runs fine. When I submit a job to EMR, I always get "Completed" within 20 seconds even though the job should take minutes. No output is produced and no log messages are printed.
I'm still confused as to whether it should be run as a Spark application or as the CUSTOM_JAR type.
This is what my main method looks like:
public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession
            .builder()
            .appName("RandomName")
            .getOrCreate();

    // process stuff
    String from_path = args[0];
    String to_path = args[1];

    Dataset<String> dataInput = spark.read().json(from_path).toJSON();
    JavaRDD<ResultingClass> map = dataInput.toJavaRDD().map(row -> convertData(row)); // conversion function, not included here

    Dataset<Row> dataFrame = spark.createDataFrame(map, ResultingClass.class);
    dataFrame
            .repartition(1)
            .write()
            .mode(SaveMode.Append)
            .partitionBy("year", "month", "day", "hour")
            .parquet(to_path);

    spark.stop();
}
I've tried these:
1)
aws emr add-steps --cluster-id j-XXXXXXXXX --steps \
Type=Spark,Name=MyApp,Args=[--deploy-mode,cluster,--master,yarn, \
--conf,spark.yarn.submit.waitAppCompletion=false, \
--class,com.my.class.with.main.Foo,s3://mybucket/script.jar, \
s3://partitioned-input-data/*/*/*/*/*.txt, \
s3://output-bucket/table-name], \
ActionOnFailure=CONTINUE --region us-west-2 --profile default
Completes in 15 seconds without errors, without any output, and without the log messages I added.
2)
aws emr add-steps --cluster-id j-XXXXXXXXX --steps \
Type=CUSTOM_JAR, \
Jar=s3://mybucket/script.jar, \
MainClass=com.my.class.with.main.Foo, \
Name=MyApp, \
Args=[--deploy-mode,cluster, \
--conf,spark.yarn.submit.waitAppCompletion=true, \
s3://partitioned-input-data/*/*/*/*/*.txt, \
s3://output-bucket/table-name], \
ActionOnFailure=CONTINUE \
--region us-west-2 --profile default
This reads the parameters incorrectly: it sees --deploy-mode as the first program argument and cluster as the second, instead of the bucket paths.
3)
aws emr add-steps --cluster-id j-XXXXXXXXX --steps \
Type=CUSTOM_JAR, \
Jar=s3://mybucket/script.jar, \
MainClass=com.my.class.with.main.Foo, \
Name=MyApp, \
Args=[s3://partitioned-input-data/*/*/*/*/*.txt, \
s3://output-bucket/table-name], \
ActionOnFailure=CONTINUE \
--region us-west-2 --profile default
I get this: Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.SparkSession
When I include all the dependencies in the jar (which I do not need to do locally),
I get: Exception in thread "main" org.apache.spark.SparkException: A master URL must be set in your configuration
I do not want to hardcode the "yarn" into the app.
I find AWS documentation very confusing as to what is the proper way to run this.
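One pattern (a sketch, not from the original post) is to leave the master out of the code and let spark-submit on EMR supply it, falling back to a local master only when an environment variable of your own choosing is set (LOCAL_RUN here is an assumed name):
SparkSession.Builder builder = SparkSession.builder().appName("RandomName");
// only hardcode a master for local development; on EMR, spark-submit provides --master yarn
if ("true".equals(System.getenv("LOCAL_RUN"))) {
    builder = builder.master("local[*]");
}
SparkSession spark = builder.getOrCreate();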
Update:
Running the command directly on the server does work, so the problem must be in the way I'm defining the CLI command.
spark-submit --class com.my.class.with.main.Foo \
s3://mybucket/script.jar \
"s3://partitioned-input-data/*/*/*/*/*.txt" \
"s3://output-bucket/table-name"
Option 1) was in fact working.
The step overview on the AWS console said the task finished within 15 seconds, but in reality it was still running on the cluster. It took about an hour to do the work, and I can see the result.
I do not know why the step misreports the result. I'm using emr-5.9.0 with Ganglia 3.7.2, Spark 2.2.0 and Zeppelin 0.7.2.
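A likely explanation, although not confirmed in the original post: the command in 1) passes --conf spark.yarn.submit.waitAppCompletion=false, which makes the submission return as soon as the application is handed off to YARN instead of waiting for it to finish, so the EMR step reports "Completed" after a few seconds even though the Spark job keeps running on the cluster.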

object com.github.nscala_time.time.Imports not found error?

I used the package com.github.nscala_time.time.Imports in my code, and I'm running the code using Spark.
Here is my stream.sh file:
#!/bin/bash
JARS_HOME=$HOME/spark-job/lib
JARS=$JARS_HOME/job-server-api_2.10-0.6.0.jar,$JARS_HOME/httpmime-4.4.1.jar,$JARS_HOME/noggit-0.6.jar,$JARS_HOME/nscala-time_2.10-2.0.0.jar
export SPARK_IP=`ifconfig | grep eth0 -1 | grep -i inet | awk '{ print $2 }' | cut -d':' -f2`
APP_JAR=$JARS_HOME/spark-jobs-tests.jar
export SPARK_LOCAL_IP=$SPARK_IP
dse spark-submit --conf "spark.cassandra.input.consistency.level=LOCAL_QUORUM" \
--total-executor-cores 2 \
--jars=$JARS \
--class "my file classpath" $APP_JAR "$1" --files $1
I have added $JARS_HOME/nscala-time_2.10-2.0.0.jar to the .sh file, but I am still getting the following error:
Exception in thread "main" scala.reflect.internal.MissingRequirementError: object com.github.nscala_time.time.Imports not found.
at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:16)
at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:17)
at scala.reflect.internal.Mirrors$RootsBase.ensureModuleSymbol(Mirrors.scala:126)
at scala.reflect.internal.Mirrors$RootsBase.staticModule(Mirrors.scala:161)
at scala.reflect.internal.Mirrors$RootsBase.staticModule(Mirrors.scala:21)
How can I resolve this?

How to run a Java application as Windows service using WinRun4J

I'm trying to run a Java application as a Windows service with WinRun4J.
I copied WinRun4J64c.exe into my application directory and placed the following service.ini file beside it:
service.class=org.boris.winrun4j.MainService
service.id=MyAPP
service.name=MyAPP
service.description=some description
classpath.1=./lib/*
classpath.2=WinRun4J.jar
[MainService]
class=play.core.server.NettyServer
But if I start the service with: WinRun4J64c.exe --WinRun4J:RegisterService I get:
Service control dispatcher error: 1063
What is wrong?
I didn't get it working, so my workaround is to use Apache Commons Daemon. I executed the included prunsrv.exe with the following parameters:
prunsrv.exe install "MyApplication" \
--Install="C:/path/to/prunsrv.exe" \
--JvmOptions=-Dpidfile.path=NUL \
--Jvm=auto \
--Startup=auto \
--StartMode=jvm \
--Classpath="c:/path/to/application/lib/*;" \
--StartClass=play.core.server.NettyServer
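Once installed, the service can be started from the Windows Services console (services.msc) or with net start followed by the service name used above.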
