How does the config file in lightbend/config work? - java

I'm having trouble with substitution using the Lightbend Config library.
I have an application.conf file with this content:
property.a = "propA"
list =
[
{
nameProp=one,
propToReplace = ${property.a}
},
{
nameProp=two,
propToReplace = ${property.a}
}
]
some.env {
property.a = "propEnvironment"
}
At some point in the code, I'm loading the property file using Configuration.load().
My goal is to substitute propToReplace with the value of property.a inside some.env, but when I run it, it gets replaced with the value from outside (property.a = "propA").
Does anybody have an idea how to solve this?
Thanks in advance

You can override it with a Java system property, for example by running your program with:
-Dproperty.a=mySubstituteValue
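As a minimal sketch of how that override is picked up (the class name SubstitutionDemo is just a placeholder, and it assumes the application.conf shown above is on the classpath): ConfigFactory.load() layers system properties over application.conf before resolving substitutions, so ${property.a} ends up with the -D value.

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class SubstitutionDemo {
    public static void main(String[] args) {
        // ConfigFactory.load() merges system properties (e.g. -Dproperty.a=mySubstituteValue)
        // over application.conf and then resolves substitutions, so ${property.a}
        // in each list entry picks up the overridden value.
        Config config = ConfigFactory.load();
        for (Config item : config.getConfigList("list")) {
            System.out.println(item.getString("nameProp") + " -> " + item.getString("propToReplace"));
        }
    }
}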

Related

Unable to use custom variables in Gradle extension

I'm using Jib (not super relevant) and I want to pass variables in from the command line in my deployment script.
I append -PinputTag=${DOCKER_TAG} -PbuildEnv=nonprod to my Gradle command, which is cool. But when a property is missing, I want that ternary default to kick in.
I'm getting the error:
Could not get unknown property 'inputTag' for project ':webserver' of type org.gradle.api.Project.
def inputTag = inputTag ?: 'latest'
def buildEnv = buildEnv ?: 'nonprod'
jib {
    container {
        mainClass = 'com.example.hi'
    }
    to {
        image = 'image/cool-image'
        tags = ['latest', inputTag]
    }
    container {
        creationTime = 'USE_CURRENT_TIMESTAMP'
        ports = ['8080']
        jvmFlags = ['-Dspring.profiles.active=' + buildEnv]
    }
}
Found Solution
def inputTag = project.hasProperty('inputTag') ? project.property('inputTag') : 'latest'
def buildEnv = project.hasProperty('buildEnv') ? project.property('buildEnv') : 'nonprod'
This seems to be working; is this the best way?
How about this?
image = 'image/cool-image:' + (project.findProperty('inputTag') ?: 'latest')
Note that jib.to.tags are additional tags: jib.to.image = 'image/cool-image' already implies image/cool-image:latest, so there is no need to duplicate latest in jib.to.tags.

Error: Multiple RestConsumerFactory found on classpath

I'm getting an error while calling addRouteDefinition. I am dynamically adding REST definitions to the CamelContext.
Error
org.apache.camel.FailedToCreateRouteException : Failed to create route ... because of Multiple RestConsumerFactory found on classpath. Configure explicit which component to use
RestsDefinition rests = camelContext.loadRestsDefinition(is);
camelContext.addRestDefinitions(rests.getRests());
for (RestDefinition restDefinition : rests.getRests()) {
    List<RouteDefinition> routeDefinitions = restDefinition.asRouteDefinition(camelContext);
    System.out.println(routeDefinitions);
    //camelContext.addRouteDefinitions(routeDefinitions);
    for (RouteDefinition route1 : routeDefinitions) {
        System.out.println("Route being Added : " + route1.getId());
        // Getting the error on this line
        camelContext.addRouteDefinition(route1);
    }
}
Can anyone help me with this?
Thank you.
The problem was with the RestConfiguration: it was not set correctly on the CamelContext, so I added camelContext.addRestConfiguration(restConfiguration);
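For reference, a minimal sketch of what that could look like when several REST-capable components are on the classpath; the component name "servlet" and the host/port values are only examples and need to match your actual setup:

import org.apache.camel.spi.RestConfiguration;

// camelContext is the existing CamelContext the REST definitions are added to
RestConfiguration restConfiguration = new RestConfiguration();
// Tell Camel explicitly which REST component to use,
// instead of letting it fail when it finds more than one candidate.
restConfiguration.setComponent("servlet"); // e.g. "servlet", "jetty", ...
restConfiguration.setHost("0.0.0.0");
restConfiguration.setPort(8080);
camelContext.addRestConfiguration(restConfiguration);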

Accessing data in an object array with Java

I'm currently working with Cucumber and Java. I would like to retrieve the path of a file from ITestResult.
I'm currently retrieving the parameters with:
Object[] test = testResult.getParameters();
However, the only thing I can access seems to be the first object's name and nothing else.
test = {Object[1]#1492}
  0 = {CucumberFeatureWrapper#1493} "Links at EDM Documents View,"
    cucumberFeature = {CucumberFeature#1516}
      path = "test/01-automation.feature"
      feature = {Feature#1518}
      cucumberBackground = null
      currentStepContainer = {CucumberScenario#1519}
      cucumberTagStatements = {ArrayList#1520} size = 1
      i18n = {I18n#1521}
      currentScenarioOutline = null
I cannot see any way of retrieving path = "test/01-automation.feature" under the cucumber feature.
Have you tried something like ((CucumberFeatureWrapper)test[0]).getCucumberFeature().getPath()?
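Expanded into a slightly more defensive sketch; the getCucumberFeature()/getPath() accessors are taken from the suggestion above and depend on your cucumber-jvm version, so treat them as an assumption:

Object[] test = testResult.getParameters();
if (test.length > 0 && test[0] instanceof CucumberFeatureWrapper) {
    // Accessor names assumed from the suggestion above, not verified
    CucumberFeatureWrapper wrapper = (CucumberFeatureWrapper) test[0];
    String path = wrapper.getCucumberFeature().getPath();
    System.out.println("Feature path: " + path);
}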

Running a Java-based Spark Job on spark-jobserver

I need to run an aggregation Spark job using spark-jobserver with low-latency contexts. I have this Scala runner to run a job that calls a Java method from a Java class.
object AggregationRunner extends SparkJob {
  def main(args: Array[String]) {
    val ctx = new SparkContext("local[4]", "spark-jobs")
    val config = ConfigFactory.parseString("")
    val results = runJob(ctx, config)
  }

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
    SparkJobValid
  }

  override def runJob(sc: SparkContext, config: Config): Any = {
    val context = new JavaSparkContext(sc)
    val aggJob = new ServerAggregationJob()
    val id = config.getString("input.string").split(" ")(0)
    val field = config.getString("input.string").split(" ")(1)
    aggJob.aggregate(context, id, field)
  }
}
However, I get the following error. I tried taking out the content returned by the Java method and am now just returning a test string, but it still doesn't work:
{
  "status": "ERROR",
  "result": {
    "message": "Ask timed out on [Actor[akka://JobServer/user/context-supervisor/single-context#1243999360]] after [10000 ms]",
    "errorClass": "akka.pattern.AskTimeoutException",
    "stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)", "akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)", "scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)", "akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)", "akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)", "akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)", "java.lang.Thread.run(Thread.java:745)"]
  }
}
I am not too sure why there is a timeout since I am only returning a string.
EDIT
So I figured out that the issue was occurring because I was using a Spark context that was created before updating a JAR. However, now that I try to use JavaSparkContext inside the Spark job, it returns to the error shown above.
What would be a permanent way to get rid of the error?
Also, would the fact that I am running a heavy Spark job in a local Docker container be a plausible reason for the timeout?
To resolve the ask timeout issue, add or change the properties below in the job server configuration file.
spray.can.server {
  idle-timeout = 210 s
  request-timeout = 200 s
}
For more information, take a look at https://github.com/spark-jobserver/spark-jobserver/blob/d1843cbca8e0d07f238cc664709e73bbeea05f2c/doc/troubleshooting.md

Rhadoop basic task on a single machine

I'm running the following code in Rhadoop:
Sys.setenv(HADOOP_HOME="/home/ashkan/Downloads/hadoop-1.0.3/")
Sys.setenv(HADOOP_BIN="/home/ashkan/Downloads/hadoop-1.0.3/bin/")
Sys.setenv(HADOOP_CONF_DIR="/home/ashkan/Downloads/hadoop-1.0.3/conf")
Sys.setenv(HADOOP_CMD="/home/ashkan/Downloads/hadoop-1.0.3/bin/hadoop")
library(Rhipe)
library(rhdfs)
library(rmr2)
hdfs.init()
small.ints = to.dfs(1:10)
mapreduce(
  input = small.ints,
  map = function(k, v)
  {
    lapply(seq_along(v), function(r){
      x <- runif(v[[r]])
      keyval(r, c(max(x), min(x)))
    })
  })
However, I get the following error:
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
Does anyone know what the problem is?
Thanks a lot.
To fix the problem, you'll have to set the HADOOP_STREAMING environment variable. The code below worked fine for me. Note that your code is not using Rhipe, so there is no need to load it.
R Code (I'm using hadoop 2.4.0)
Sys.setenv("HADOOP_CMD"="/usr/local/hadoop/bin/hadoop")
Sys.setenv("HADOOP_STREAMING"="/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar")
library(rhdfs)
# Initialise
hdfs.init()
library(rmr2)
hdfs.init()
small.ints = to.dfs(1:10)
mapreduce(
input = small.ints,
map = function(k, v)
{
lapply(seq_along(v), function(r){
x <- runif(v[[r]])
keyval(r,c(max(x),min(x)))
})})
I'm guessing that your Hadoop streaming path will be as below:
Sys.setenv("HADOOP_STREAMING"="/home/ashkan/Downloads/hadoop-1.0.3/contrib/streaming/hadoop-streaming-1.0.3.jar")
Hope this helps.
