I have a Jenkins cluster with one master (2 executors) and one agent (2 executors). How can I get the total number of executors in my Jenkins cluster using a Java or Groovy script?
If you have access to the script console, you can run something like this:
import jenkins.model.Jenkins

final jenkins = Jenkins.instance
// Sum the executor count of every computer (master and agents)
jenkins.computers.inject(0) { acc, item ->
    acc + item.numExecutors
}
If you are running this in a sandboxed pipeline, you will need the methods whitelisted by an administrator in the In-process Script Approval page (http://jenkinsUrl/scriptApproval/) or through a plugin that provides whitelists. You won't be able to use inject right now because of JENKINS-26481, but your pipeline script might look like:
import jenkins.model.Jenkins

final jenkins = Jenkins.instance
int executorCount = 0
for (def computer in jenkins.computers) {
    executorCount += computer.numExecutors
}
// Rest of pipeline
If your pipeline does not run in a sandbox, you may have access to these objects without whitelisting.
I have written code that leverages the Azure SDK for Blobs to interact with blob storage.
As a clever and dutiful developer, I did not test my code by navigating the live application; instead I created a Spring Boot JUnit test and spent a few hours fixing all my mistakes. I didn't use any kind of mocking, in fact, since my problem was using the library the correct way. I ran the code against a live instance of a blob storage and checked that all my Java methods worked as expected.
I am writing here because:
To call it a day, I hardcoded the credentials in my source files. The repository is a company-private repository, so not that much harm: credentials can be rotated, and developers can all access the Azure portal and get the credentials. But I still don't like the idea of pushing credentials into code.
Having these JUnit tests run on Azure DevOps pipelines would be a good idea.
I have known from the very beginning that hardcoding credentials is a worst practice, but this morning I just wanted to focus on my task. Now I want to adopt best practices, so I am asking about redesigning the test structure.
The test code is below.
The code creates an ephemeral container and tries to store/retrieve/delete blobs. It uses a GUID to create a unique private workspace, which is cleaned up after the test finishes.
@SpringBootTest(classes = FileRepositoryServiceAzureBlobImplTest.class)
@SpringBootConfiguration
@TestConfiguration
@TestPropertySource(properties = {
        "azure-storage-container-name:amlcbackendjunit",
        "azure-storage-connection-string:[not going to post it on Stackoverflow before rotating it]"
})
class FileRepositoryServiceAzureBlobImplTest {

    private static final Resource LOREM_IPSUM = new ClassPathResource("loremipsum.txt", FileRepositoryServiceAzureBlobImplTest.class);

    private FileRepositoryServiceAzureBlobImpl uut;
    private BlobContainerClient blobContainerClient;
    private String loremChecksum;

    @Value("${azure-storage-connection-string}")
    private String azureConnectionString;
    @Value("${azure-storage-container-name}")
    private String azureContainerName;

    @BeforeEach
    void beforeEach() throws IOException {
        String containerName = azureContainerName + "-" + UUID.randomUUID();
        blobContainerClient = new BlobContainerClientBuilder()
                .httpLogOptions(new HttpLogOptions().setApplicationId("az-sp-sb-aml"))
                .clientOptions(new ClientOptions().setApplicationId("az-sp-sb-aml"))
                .connectionString(azureConnectionString)
                .containerName(containerName)
                .buildClient();
        blobContainerClient.create();
        uut = spy(new FileRepositoryServiceAzureBlobImpl(blobContainerClient));
        try (InputStream loremIpsumInputStream = LOREM_IPSUM.getInputStream()) {
            loremChecksum = DigestUtils.sha256Hex(loremIpsumInputStream);
        }
        blobContainerClient
                .getBlobClient("fox.txt")
                .upload(BinaryData.fromString("The quick brown fox jumps over the lazy dog"));
    }

    @AfterEach
    void afterEach() throws IOException {
        blobContainerClient.delete();
    }

    @Test
    void store_ok() {
        String desiredFileName = "loremIpsum.txt";
        FileItemDescriptor output = assertDoesNotThrow(() -> uut.store(LOREM_IPSUM, desiredFileName));
        assertAll(
                () -> assertThat(output, is(notNullValue())),
                () -> assertThat(output, hasProperty("uri", hasToString(Matchers.startsWith("azure-blob://")))),
                () -> assertThat(output, hasProperty("size", equalTo(LOREM_IPSUM.contentLength()))),
                () -> assertThat(output, hasProperty("checksum", equalTo(loremChecksum))),
                () -> {
                    String localPart = substringAfter(output.getUri().toString(), "azure-blob://");
                    assertAll(
                            () -> assertTrue(blobContainerClient.getBlobClient(localPart).exists())
                    );
                }
        );
    }
}
In production (but also in SIT/UAT), the real Spring Boot application will get the configuration from the container environment, including the storage connection string. Yes, for this kind of test I could also avoid using Spring and @TestPropertySource, because I'm not leveraging any bean from the context.
Question
I want to ask how I can amend this test in order to:
Decouple the connection string from the code
Softly-ignore the test if for some reason the connection string is not present (e.g. a developer has just cloned the project and wants to kick-start it) (note 1)
Run this test (with a working connection string) from Azure DevOps pipelines, where I can configure virtually any environment variable
Here is the build job that runs the tests:
- task: Gradle@2
  displayName: Build with Gradle
  inputs:
    gradleWrapperFile: gradlew
    gradleOptions: -Xmx3072m $(gradleJavaProperties)
    options: -Pci=true -PbuildId=$(Build.BuildId) -PreleaseType=${{parameters.releaseType}}
    jdkVersionOption: 1.11
    jdkArchitectureOption: x64
    publishJUnitResults: true
    sqAnalysisEnabled: true
    sqGradlePluginVersionChoice: specify
    sqGradlePluginVersion: 3.2.0
    testResultsFiles: '$(System.DefaultWorkingDirectory)/build/test-results/**/TEST-*.xml'
    tasks: clean build
Note 1: the live application can be kick-started without the storage connection string. It falls back to a local temporary directory.
The answer is a bit complex to explain, so I did my best.
TL;DR
Note that the original variable names are redacted, and YMMV if you try to recreate the example with the exact keys I used.
Create a secret pipeline variable containing the connection string, and bury* it in the pipeline
Example name: testStorageAccountConnectionString
Change the Gradle task:
- task: Gradle@3
  displayName: Build with Gradle
  inputs:
    gradleWrapperFile: gradlew
    gradleOptions: -Xmx10240m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8 -DAZURE_STORAGE_CONNECTION_STRING=$(AZURE_STORAGE_CONNECTION_STRING)
    options: --build-cache -Pci=true -PgitCommitId=$(Build.SourceVersion) -PbuildId=$(Build.BuildId) -Preckon.stage=${{parameters.versionStage}} -Preckon.scope=${{parameters.versionScope}}
    jdkVersionOption: 1.11
    jdkArchitectureOption: x64
    publishJUnitResults: true
    sqAnalysisEnabled: true
    sqGradlePluginVersionChoice: specify
    sqGradlePluginVersion: 3.2.0
    testResultsFiles: '$(System.DefaultWorkingDirectory)/build/test-results/**/TEST-*.xml'
    tasks: clean build
  env:
    AZURE_STORAGE_CONNECTION_STRING: $(testStorageAccountConnectionString)
Explanation
Spring Boot accepts the placeholder ${azure.storageConnectionString} from an environment variable AZURE_STORAGE_CONNECTION_STRING. Please read the docs and try it locally first. This means we need to run the test with that environment variable properly set in order to resolve the placeholder.
Gradle can run with -D to define a property. -DAZURE_STORAGE_CONNECTION_STRING=$(AZURE_STORAGE_CONNECTION_STRING) passes a value named AZURE_STORAGE_CONNECTION_STRING to the test run, equal to the pipeline environment variable of the same name (not very imaginative naming, I know).
Azure DevOps pipelines protect secret variables from unwanted access. We created the pipeline variable as a secret, so there is one more trick needed first.
The Gradle task's env attribute sets environment variables for the step. In this case, we make sure that Gradle runs with AZURE_STORAGE_CONNECTION_STRING set to the value of testStorageAccountConnectionString. env is the only place where the Azure Pipelines agent will resolve and release the content of a secret variable.
Secrets cannot be retrieved from the web interface any more; Azure Pipelines is designed for this.
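The question also asked about softly-ignoring the test when no connection string is configured. The setup above does not cover that, but a minimal sketch under a couple of assumptions (JUnit 5 Assumptions, which the test already has on the classpath, and a default value on the Spring placeholder) might look like this; the property name follows the question and may differ in your project:
// Hypothetical sketch: give the placeholder an empty default so the Spring context still
// starts when the variable is missing, then skip (not fail) the test via JUnit 5 Assumptions.
@Value("${azure-storage-connection-string:}")
private String azureConnectionString;

@BeforeEach
void beforeEach() throws IOException {
    // assumeTrue marks the test as skipped instead of failed when the condition is false
    Assumptions.assumeTrue(azureConnectionString != null && !azureConnectionString.trim().isEmpty(),
            "azure-storage-connection-string not set, skipping Azure Blob integration test");
    // ... the container setup from the question goes here ...
}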
I have a Jenkins pipeline:
@Library('sharedLib@master')
import org.foo.point

pipeline {
    agent { label 'slaveone' }
    // agent { label 'master' }
    stages {
        stage('Data Build') {
            steps {
                script {
                    def Point = new point()
                    Point.hello("mememe")
                }
            }
        }
    }
}
which runs a small bit of code in a library called 'jenkins-shared-library/src/sharedLib':
package org.foo

import java.io.File

class point {
    def hello(name) {
        File saveFile = new File("c:/temp/jenkins_log.txt")
        saveFile.write "hello"
    }
}
It runs fine on both 'master' and 'slaveone', but in both cases the 'jenkins_log.txt' file appears on the master. The build's console log contains this:
Running on slaveone in d:\Jenkins_WorkDir\workspace\mypipeline
How is this code running on slaveone and writing files to master?
Edit: I should also mention that this is my third attempt at doing this. The first one was with Groovy code direct in the pipeline, and the second was using a 'def' type call in the vars directory. Both produced the same behaviour, seemingly oblivious to the agent it was being run on.
I think everything inside the script block runs on the master, but here I found a workaround: Jenkins Declarative Pipeline, run groovy script on slave agent
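One minimal way to get the file written on the agent is to route the write through a pipeline step. A hedged sketch, assuming the library class is handed the pipeline script so it can call steps such as writeFile (steps run in the allocated node's workspace, unlike new File(...), which runs on the master JVM):
// src/org/foo/point.groovy - hypothetical variant that writes on the agent
package org.foo

class point implements Serializable {
    def hello(script, name) {
        // writeFile is a pipeline step, so it executes in the workspace of the node
        // currently allocated by the pipeline, not on the master
        script.writeFile file: 'jenkins_log.txt', text: "hello ${name}"
    }
}
In the pipeline's script block you would then call new point().hello(this, 'mememe'), passing this so the class can invoke pipeline steps.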
Jenkins stores all logs on the master only; that's why you cannot find any logs on the nodes.
I'm writing a program that accesses a Spark cluster as a client. It connects like this:
val sc = new SparkContext(new SparkConf(loadDefaults = false)
  .setMaster(sparkMasterEndpoint)
  .setAppName("foo")
  .set("spark.cassandra.connection.host", cassandraHost)
  .setJars(Seq("target/scala-2.11/foo_2.11-1.0.jar"))
)
And that context is then used to run operations on Spark. However, any lambdas / anonymous functions I use in my code can't run on Spark. For example, I might have:
val groupsDescription = sc.someRDD()
  .groupBy(x => x.getSomeString())
  .map(x => x._1 + " " + x._2.count(_ => true))
This returns a lazily evaluated RDD, but when I try to extract some value from that RDD, I get this exception from Spark:
java.lang.ClassNotFoundException: my.app.Main$$anonfun$groupify$1$$anonfun$2$$anonfun$apply$1
Even though I've supplied my application's jar file to Spark. I even see a log line (in my application, not in my spark cluster) telling me the jar has been uploaded like this:
[info] o.a.s.SparkContext - Added JAR target/scala-2.11/foo_2.11-1.0.jar at spark://192.168.51.15:53575/jars/foo_2.11-1.0.jar with timestamp 1528320841157
I can find absolutely NOTHING on this subject anywhere, and it's driving me crazy! How has nobody else run into this issue? All the related results I see are about bundling your jars for use with spark-submit, which is not what I'm doing; I have a standalone application that's connecting to an independent Spark cluster. Is this simply not supported? What else could I be missing? What else could be causing this?
I'm using Jenkins as a CI tool. I used the REST API to build a job remotely, but I don't know how to get the test results remotely as well.
I couldn't be more thankful if anybody knows a solution.
Use the XML or JSON API. At most pages in Jenkins you can add /api/ to the URL and get data in XML, JSON and similar formats. So for a job you can go to <Jenkins URL>/job/<Job Name>/api/xml and get information about the job, builds, etc. For a build you can go to <Jenkins URL>/job/<Job Name>/<build number>/api/xml and you will get a summary for the build. Note that you can use lastXXXBuild in order to get the latest successful, stable, failing, or complete build, like this: <Jenkins URL>/job/<Job Name>/lastCompletedBuild/api/xml.
Additionally, if you're using any plugin which publishes test results to the build, then for a given job you can go to <Jenkins URL>/job/<Job Name>/lastCompletedBuild/testReport/api/xml and you will get an XML report with the results.
There is a lot more to it; you can control what is exported with the tree and depth parameters. For a summary go to <Jenkins URL>/api/.
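As a rough sketch of pulling those results from a script outside Jenkins (the URL, job name and user:apiToken credentials below are placeholders you would replace with your own):
// Hypothetical Groovy client: fetch the latest completed build's test report as JSON
def jenkinsUrl = 'https://jenkins.example.com'
def jobName    = 'my-job'
def auth       = 'user:apiToken'.bytes.encodeBase64().toString()

def conn = new URL("${jenkinsUrl}/job/${jobName}/lastCompletedBuild/testReport/api/json").openConnection()
conn.setRequestProperty('Authorization', "Basic ${auth}")

// passCount/failCount/skipCount are top-level fields of the testReport API payload
def report = new groovy.json.JsonSlurper().parse(conn.inputStream)
println "passed=${report.passCount}, failed=${report.failCount}, skipped=${report.skipCount}"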
Well, if you are using a Jenkins shared library, or have decided to permit the security exceptions (a less good approach), then you can access the results from a job and send them out to wherever you like - push vs pull:
def getCurrentBuildFailedTests() {
    def failedTests = []
    def build = currentBuild.build()
    // TestResultAction is present only when JUnit results have been published for this build
    def action = build.getActions(hudson.tasks.junit.TestResultAction.class)
    if (action) {
        def failures = build.getAction(hudson.tasks.junit.TestResultAction.class).getFailedTests()
        println "${failures.size()} Test Results Found"
        for (def failure in failures) {
            failedTests.add(['name': failure.name, 'url': failure.url, 'details': failure.errorDetails])
        }
    }
    return failedTests
}
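A possible usage sketch from a pipeline, assuming the JUnit results have already been published in the build and the function above is available (e.g. from a shared library):
// Hypothetical call site: summarise failed tests after the junit step has run
def failed = getCurrentBuildFailedTests()
if (failed) {
    echo "Found ${failed.size()} failed tests"
    failed.each { t -> echo "${t.name}: ${t.details}" }
}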
I have a pipeline job using a Groovy script set up to run multiple tests in "parallel", but I am curious as to how to get the report(s) unified.
I am coding my Selenium tests in Java and using TestNG and Maven.
When I look at the report in target/surefire-reports, the only thing there is the report from the "last" test suite that ran.
How can I get a report that combines all of the tests within the Pipeline parallel job?
Example Groovy code:
node() {
    try {
        parallel 'exampleScripts': {
            node() {
                stage('ExampleScripts') {
                    def mvnHome
                    mvnHome = tool 'MAVEN_HOME'
                    env.JAVA_HOME = tool 'JDK-1.8'
                    bat(/"${mvnHome}\bin\mvn" -f "C:\workspace\Company\pom.xml" test -DsuiteXmlFile=ExampleScripts.xml -DenvironmentParam="$ENVIRONMENTPARAM" -DbrowserParam="$BROWSERPARAM" -DdebugParam="false"/)
                } // end stage
            } // end node
        }, // end parallel
        'exampleScripts2': {
            node() {
                stage('ExampleScripts2') {
                    def mvnHome
                    mvnHome = tool 'MAVEN_HOME'
                    env.JAVA_HOME = tool 'JDK-1.8'
                    bat(/"${mvnHome}\bin\mvn" -f "C:\workspace\Company\pom.xml" test -DsuiteXmlFile=ExampleScripts2.xml -DenvironmentParam="$ENVIRONMENTPARAM" -DbrowserParam="$BROWSERPARAM" -DdebugParam="false"/)
                } // end stage
            } // end node
            step([$class: 'Publisher', reportFilenamePattern: 'C:/workspace/Company/target/surefire-reports/testng-results.xml'])
        } // end parallel
There is a little more to this code after this in terms of emailing the test runner the result of the test and such.
This works great, other than the reporting aspect.
I prefer to use ExtentReports because it has an ExtentX server that allows you to report on multiple different test reports.
I used to use ReportNG, but development on that has stalled, so I don't recommend it any more. It doesn't allow you to combine reports anyway.
Other than that, you could use Couchbase or a similar JSON database to store test results and then generate your own report from that information.