I'm using Jib (not super relevant) and I want to pass variables from the command line into my deployment script.
I append -PinputTag=${DOCKER_TAG} -PbuildEnv=nonprod to my Gradle command, which works fine. But when a property is missing, I want the ?: fallback to kick in.
I'm getting the error:
Could not get unknown property 'inputTag' for project ':webserver' of type org.gradle.api.Project.
from these declarations in my build.gradle:
def inputTag = inputTag ?: 'latest'
def buildEnv = buildEnv ?: 'nonprod'
jib {
    container {
        mainClass = 'com.example.hi'
    }
    to {
        image = 'image/cool-image'
        tags = ['latest', inputTag]
    }
    container {
        creationTime = 'USE_CURRENT_TIMESTAMP'
        ports = ['8080']
        jvmFlags = ['-Dspring.profiles.active=' + buildEnv]
    }
}
Found Solution
def inputTag = project.hasProperty('inputTag') ? project.property('inputTag') : 'latest'
def buildEnv = project.hasProperty('buildEnv') ? project.property('buildEnv') : 'nonprod'
This seems to work. Is this the best way?
How about this?
image = 'image/cool-image:' + (project.findProperty('inputTag') ?: 'latest')
Note that jib.to.tags are additional tags: jib.to.image = 'image/cool-image' already implies image/cool-image:latest, so there's no need to duplicate latest in jib.to.tags.
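Putting it together, here's a minimal sketch of the whole build.gradle using the findProperty form (same image, property, and class names as above, with the two container blocks merged):

def buildEnv = project.findProperty('buildEnv') ?: 'nonprod'

jib {
    to {
        // The tag resolves to 'latest' unless -PinputTag is passed on the command line
        image = 'image/cool-image:' + (project.findProperty('inputTag') ?: 'latest')
    }
    container {
        mainClass = 'com.example.hi'
        creationTime = 'USE_CURRENT_TIMESTAMP'
        ports = ['8080']
        jvmFlags = ['-Dspring.profiles.active=' + buildEnv]
    }
}

Invoked either as ./gradlew jib -PinputTag=${DOCKER_TAG} -PbuildEnv=nonprod, or with no -P flags at all to fall back to the defaults.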
Related
I have a Lambda function written in Node.js (using the AWS Node.js SDK), and we use Gradle (build.gradle) to package and deploy it to AWS.
Deployment works fine, but when I try to update the tags on this Lambda and redeploy, the new tags are not applied to the function.
So if I deploy the Lambda with build.gradle for the first time (by first time I mean when the Lambda is being created) with TAG A, it works fine: the tag is applied and I can see it in the AWS Console. But when I redeploy it with another tag, "TAG B", the new tag is not applied and the existing Lambda tags are not updated. Any idea or suggestion what I am doing wrong? Thanks.
Below is the section of the build.gradle file where I apply the tags for the Lambda.
createOrUpdateFunction {
    handler = 'index.handler'
    role = cfo.DataDigestContentSearchLambdaIamRoleArn
    runtime = 'python3.7'
    timeout = 10
    tags = [
        tagA: 'data-team',
        tagB: 'someValue'
    ]
}
Here is the complete build.gradle file:
import com.amazonaws.services.lambda.model.TagResourceRequest

apply plugin: 'com.abc.gradle.nodejs.yarn'
apply plugin: 'com.abc.gradle.aws.lambda.deployment'
apply from: rootProject.file('gradle/yarn-webpacked-lambda.gradle')

nodejs {
    packaging {
        name = '@abc/data-fulfillment-facebook-lambda'
        dependency project(':core:js')
        dependency project(':platforms:facebook:js:core')
        dependency '@abc/data-ingest-api', abc_INGEST_SEMVER
        dependency '@abc/data-ingest-core', abc_INGEST_SEMVER
        dependency 'aws-sdk', AWS_SEMVER
    }
}

webpackPackageJson.dependencies << ['fb': FB_SEMVER]

lambdaRepository {
    artifactName = 'data-fulfillment-facebook'
}

lambda {
    functionName = "${config.aws.target.envPrefix}-${lambdaRepository.artifactName}"
}

def cfo = cloudformation.outputs as Map<String, String>

createOrUpdateFunction {
    handler = 'index.handler'
    role = cfo.DataFulfillmentFacebookLambdaIamRoleArn
    runtime = 'nodejs12.x'
    memorySize = 256
    timeout = 30
    tags = [
        team: 'data-team',
        name: 'someName'
    ]
    environmentAsMap << [
        FACEBOOK_LEDGER_TABLE_NAME: cfo.DataFulfillmentLedgerTableName,
        INSTAGRAM_DISCOVERY_TOKEN_SECRET: cfo.DataFulfillmentInstagramDiscoveryTokenSecretName
    ]
}

task registerTaskProcessor(type: RegisterTaskProcessorTask) {
    client 'target'
    tableName = cfo.DataFulfillmentTaskProcessorRegistryV2TableName
    entryName = 'facebook'
    rules << [regex: ['^facebook', [var: 'task.type']]]
    type = 'lambda-dispatch'
    params << [functionName: lambda.functionName, qualifier: 'live']
}
I'm trying to execute the following command from the console:
./gradlew cucumber -Pthreads=80 -Ptags=@ALL_API_TESTS
In build.gradle I have:
cucumber {
    threads = "$threads"
    glue = 'classpath:com.sixtleasing.cucumber.steps'
    plugin = ['pretty']
    tags = "$tags"
    featurePath = 'src/main/resources/feature'
    main = 'cucumber.api.cli.Main'
}
but it doesn't work. How can I fix it?
Your original expression sets threads to a String value, but it is clearly a numeric one, so you need to use something like:
int threadsNum = "$threads".toInteger()

cucumber {
    threads = threadsNum
    glue = 'classpath:com.sixtleasing.cucumber.steps'
    plugin = ['pretty']
    tags = "$tags"
    featurePath = 'src/main/resources/feature'
    main = 'cucumber.api.cli.Main'
}
Hope this helps.
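If the -P flags should also be optional (same issue as in the first question above), findProperty with a default avoids the unknown-property failure; the fallback values below are placeholders:

int threadsNum = (project.findProperty('threads') ?: '10').toString().toInteger()
String tagsExpr = (project.findProperty('tags') ?: '@ALL_API_TESTS').toString()

cucumber {
    threads = threadsNum
    glue = 'classpath:com.sixtleasing.cucumber.steps'
    plugin = ['pretty']
    tags = tagsExpr
    featurePath = 'src/main/resources/feature'
    main = 'cucumber.api.cli.Main'
}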
I'm having trouble with substitution using the Lightbend Config library.
I have an application.conf file with this content:
property.a = "propA"

list = [
    {
        nameProp = one,
        propToReplace = ${property.a}
    },
    {
        nameProp = two,
        propToReplace = ${property.a}
    }
]

some.env {
    property.a = "propEnvironment"
}
At some point in the code, I'm loading the property file using Configuration.load().
My goal is to substitute propToReplace with the value of property.a inside some.env, but after I run it, it gets replaced with the value defined at the top level (property.a = "propA").
Does anybody have an idea how to solve this?
Thanks in advance.
You can substitute it with a JVM system property, e.g. by running your program with:
-Dproperty.a=mySubstituteValue
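If the value has to come from the some.env block itself rather than from the command line, a common pattern with Typesafe Config is to parse the file unresolved, stack the environment block on top, and only then resolve the substitutions. A sketch (written here in Groovy for brevity; the Java/Scala calls are the same, and it assumes the file is on the classpath as application.conf):

import com.typesafe.config.ConfigFactory

// Parse without resolving, overlay some.env, then resolve:
// property.a from some.env now shadows the top-level property.a,
// so ${property.a} in the list resolves to "propEnvironment".
def base = ConfigFactory.parseResources('application.conf')
def config = base.getConfig('some.env').withFallback(base).resolve()

assert config.getConfigList('list').every { it.getString('propToReplace') == 'propEnvironment' }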
I have the following code:
fun checkoutBranch(path: Path, name: String) {
    Git.open(path.toFile()).use { git ->
        val branchExists = git
            .branchList()
            .setListMode(ListBranchCommand.ListMode.ALL)
            .call()
            .filterNot { it.name.startsWith("refs/remotes/") }
            .map { it.name }
            .any { it.endsWith(name) }
        val ref = git
            .checkout()
            .setCreateBranch(!branchExists)
            .setName(name)
            .setUpstreamMode(CreateBranchCommand.SetupUpstreamMode.TRACK)
            .call()
    }
}
When I call it with name = master, everything works as expected. A subsequent call with name = test causes a new branch to be created, but ref is null. Looking at CheckoutCommand#L285, it seems that ref.name = refs/heads/master for master, but for test, ref.name = refs/tags/test, and ref is then set to null.
Ref ref = repo.findRef(name);
if (ref != null && !ref.getName().startsWith(Constants.R_HEADS))
    ref = null;
What is happening here? Is this the expected behavior for a new branch? By going into the repo, I can see that it is in detached HEAD state, perhaps causing this issue.
Thanks to @ElpieKay for the clue. The problem was caused by the presence of a tag named test: apparently, JGit prefers tags over branches when looking up references by short name.
I solved the issue by explicitly specifying the branch name as refs/heads/test (in setName).
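The corrected checkout from the function above (only the setName argument changes):

val ref = git
    .checkout()
    .setCreateBranch(!branchExists)
    // Fully qualified ref: a tag with the same short name can no longer shadow the branch
    .setName("refs/heads/$name")
    .setUpstreamMode(CreateBranchCommand.SetupUpstreamMode.TRACK)
    .call()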
I need to run an aggregation Spark job on spark-jobserver using low-latency contexts. I have this Scala runner, which runs the job by calling a method on a Java class.
import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark.SparkContext
import org.apache.spark.api.java.JavaSparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object AggregationRunner extends SparkJob {
  def main(args: Array[String]) {
    val ctx = new SparkContext("local[4]", "spark-jobs")
    val config = ConfigFactory.parseString("")
    val results = runJob(ctx, config)
  }

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
    SparkJobValid
  }

  override def runJob(sc: SparkContext, config: Config): Any = {
    val context = new JavaSparkContext(sc)
    // ServerAggregationJob is the Java class that does the actual aggregation
    val aggJob = new ServerAggregationJob()
    val id = config.getString("input.string").split(" ")(0)
    val field = config.getString("input.string").split(" ")(1)
    aggJob.aggregate(context, id, field)
  }
}
However, I get the following error. I tried removing the content returned by the Java method, and I now just return a test string, but it still doesn't work:
{
  "status": "ERROR",
  "result": {
    "message": "Ask timed out on [Actor[akka://JobServer/user/context-supervisor/single-context#1243999360]] after [10000 ms]",
    "errorClass": "akka.pattern.AskTimeoutException",
    "stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)", "akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)", "scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)", "akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)", "akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)", "akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)", "java.lang.Thread.run(Thread.java:745)"]
  }
}
I am not too sure why there is a timeout, since I am only returning a string.
EDIT
So I figured out that the issue was occurring because I was using a Spark context that had been created before updating a JAR. However, now that I try to use JavaSparkContext inside the Spark job, the error shown above comes back.
What would be a permanent way to get rid of the error?
Also, would the fact that I am running a heavy Spark job in a local Docker container be a plausible reason for the timeout?
To resolve the ask-timeout issue, add or change the following properties in the job server configuration file:
spray.can.server {
    idle-timeout = 210 s
    request-timeout = 200 s
}
For more information, take a look at the troubleshooting guide: https://github.com/spark-jobserver/spark-jobserver/blob/d1843cbca8e0d07f238cc664709e73bbeea05f2c/doc/troubleshooting.md
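For the stale-context part of the question (the EDIT), the context can also be dropped and recreated over spark-jobserver's REST API after uploading a new JAR, so jobs never run against a context built from old code. Assuming the default port and the context name from the error message (parameters as in the spark-jobserver README):

curl -X DELETE 'localhost:8090/contexts/single-context'
curl -d '' 'localhost:8090/contexts/single-context?num-cpu-cores=4&memory-per-node=512m'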