Proper way to trigger dependent job execution on Hudson - java

I have job A that is building after developers commit code (SCM change). I also have job B that should be run once a day (by cron) and it should use the artifact that results from execution of build A.
Is it possible to configure Hudson job B to run on cron, and, before it really executes, to trigger execution of job A?
Job A shouldn't know anything about job B.

Perhaps a better way to do what you want: have Job A mark the files Job B needs as artifacts (this preserves them between builds). Then put Job B on its cron schedule; when it runs, it uses the Copy Artifact plugin to retrieve the required files from Job A, and can then do its own build operation.
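In modern Jenkins Pipeline syntax, the same idea looks roughly like the sketch below; the job name, artifact path, and cron spec are illustrative, and the Copy Artifact plugin must be installed:

```groovy
// Job A's Jenkinsfile: after building, archive the artifact so it
// survives between builds and can be copied by other jobs.
archiveArtifacts artifacts: 'target/*.jar', fingerprint: true

// Job B's Jenkinsfile: runs on its own cron trigger and copies
// the latest successful artifact from Job A.
properties([pipelineTriggers([cron('H 2 * * *')])])
node {
    copyArtifacts projectName: 'job-A',
                  selector: lastSuccessful(),
                  filter: 'target/*.jar'
    // ... Job B's own build steps, using the copied artifact ...
}
```

Note that Job A still knows nothing about Job B; the dependency lives entirely in Job B's configuration.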

If you have a Maven project (which is also a good way to pass artifacts from one build to another), the M2 Extra Steps Jenkins plugin (now deprecated and, I think, integrated into the Maven plugin) allows you to do that:
As a pre-build step, add 'Build another project', check 'Lock until build is done', and that should do what you need.
If you have a freestyle project, I'm not sure. If no equivalent exists, you might be able to come up with something based on the Locks and Latches plugin.
All that said, why do you want to rebuild A before B if it hasn't changed since the last SCM commit?
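For reference, the "trigger another project and block until it is done" pre-build step described above maps onto the Pipeline `build` step; a minimal sketch (the job name is an assumption):

```groovy
// Job B's Jenkinsfile: trigger job A first and block until it finishes,
// then continue with B's own steps using A's output.
node {
    def upstream = build job: 'job-A', wait: true, propagate: true
    echo "job-A finished with result: ${upstream.result}"
    // ... Job B's build steps ...
}
```

With `propagate: true`, a failure in job A fails job B immediately, which also answers the "why rebuild A if nothing changed" concern: you could set A itself to skip work when there is no SCM change.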

How to set `killSoftly` for a specific Jenkins job?

My Jenkins build hangs between build and post-build steps.
The console output shows there is a 6-minute wait (but I've seen waits of up to one hour):
10:53:26 BUILD FAILED in 1m 7s
10:53:26 4 actionable tasks: 4 executed
10:53:26 Build step 'Invoke Gradle script' changed build result to FAILURE
10:53:26 Build step 'Invoke Gradle script' marked build as failure
11:09:29 [CucumberReport] Using Cucumber Reports version 4.9.0
I found a couple of questions with similar issues, and they say the solution is setting -DSoftKillWaitSeconds=0 in jenkins.xml.
However, I need a way to set the option for particular jobs only, without messing with global Jenkins settings (I wouldn't want to mess with other projects).
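For context, the global setting those answers describe goes into the Java arguments of the Jenkins service wrapper (jenkins.xml on Windows); the surrounding values here are illustrative, and the key point is that it applies to every job on the controller:

```xml
<!-- jenkins.xml (Windows service wrapper) - a global setting, not per job -->
<arguments>-Xrs -Xmx512m -DSoftKillWaitSeconds=0
  -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>
```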
EDIT:
When I manually abort the job before the [CucumberReport] step, Cucumber reports are still generated.
I also checked the 'Abort the build if it's stuck' checkbox in the Build Environment options, with the time-out strategy set to 'No Activity' (timeout seconds = 2).
When I build the project with these settings, the build fails with "Aborted after 0 seconds" shown in Build History, as before, but the console output is the same: nothing changes, and the Cucumber reports are still generated after a certain timeout.
It is not possible to set a job-specific value for SoftKillWaitSeconds: the value is read by the Jenkins core at a point where the job name is not known.
My recommendation is to fix the abort handling in the job itself, so it does not depend on a "soft kill timeout". If you're running on a Unix-ish system, you can ensure this by running your job in a new process group (set -m in bash) and, for example, setting up a proper exit trap.
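A minimal sketch of that approach, assuming a bash build wrapper (the build command here is a placeholder for your real Gradle invocation): the build runs in its own process group, and an exit trap kills the whole group so no child process outlives the job.

```shell
#!/usr/bin/env bash
# Enable job control: background children get their own process group.
set -m

cleanup() {
  # Kill the build's entire process group if any of it is still running.
  if [ -n "${BUILD_PID:-}" ]; then
    kill -- -"$BUILD_PID" 2>/dev/null || true
  fi
}
trap cleanup INT TERM EXIT

# Stand-in for the real build command, e.g. ./gradlew build
sleep 1 &
BUILD_PID=$!
wait "$BUILD_PID"
echo "build finished cleanly"
```

When Jenkins aborts the job, the wrapper receives SIGTERM, the trap fires, and the whole process group dies immediately, so there is nothing left for a soft-kill wait to linger on.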
We are using the Build-timeout plugin to kill stuck jobs, with the timeout strategy set to 'No Activity' or 'Absolute'. For me, this is a good approach for freestyle projects.
The reason your build is "Aborted after 0 seconds" is most likely that there are unfinished child processes.
From the documentation:
Because Java only allows threads to be interrupted at a set of fixed locations, depending on how a build hangs, the abort operation might not take effect. For example:
if Jenkins is waiting for child processes to complete, it can abort right away;
if Jenkins is stuck in an infinite loop, it can never be aborted;
if Jenkins is doing network or file I/O within the Java VM (such as a lengthy file copy or SVN update), it cannot be aborted.
You could try the 'Absolute' timeout strategy. You can define a global variable so that you do not repeat the timeout value in each job:
Go to "Manage Jenkins" > "Configure System".
Check "Environment variables" in "Global properties".
Add an environment variable name="GLOBAL_TIMEOUT_MINUTES" value="20".
Go to a configuration page of a project.
Check "Abort the build if it's stuck" in "Build Environment".
Select "Absolute" for "Time-out strategy" (the following also applies to the other strategies).
Set "${GLOBAL_TIMEOUT_MINUTES}" for "Timeout".
Set timeout action "Abort the build".
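The steps above can be sketched in Pipeline syntax as well, where the Build-timeout plugin's role is played by the built-in timeout step; the variable name matches the global property configured above, and the fallback value is an assumption:

```groovy
// Pipeline equivalent of the Build-timeout "Absolute" strategy,
// reusing the globally configured GLOBAL_TIMEOUT_MINUTES variable.
node {
    timeout(time: (env.GLOBAL_TIMEOUT_MINUTES ?: '20') as Integer,
            unit: 'MINUTES') {
        // ... build steps go here ...
    }
}
```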
If this does not work, you could look in the logs (https://your-jenkins-server/log) or in a thread dump.
The hanging may be caused by a new or old version of a plugin. Try to find out which child processes are unfinished, and try disabling post-build actions one by one to find the one causing the issue.
See also https://superuser.com/questions/1401879/debugging-what-happens-when-a-jenkins-build-unexpectedly-pauses-or-hangs

multiple job plugin running job multiple time - Jenkins

I'm using the Multijob plugin. I created a job with it and configured two jobs in it, as shown here:
I've configured both jobs to run sequentially, but when I build, the execution order is:
Job 1
Job 2
Then again:
Job 2
Job 1
I want the execution to run Job 1 and then Job 2, once each. How do I configure that?
Steps to trigger multiple jobs using the Multijob plugin:
In your first job, "Rcontact_Dashboard_testSuite", click on 'Add build step'.
Then select 'Trigger/call builds on other projects'.
In the 'Projects to build' field, type "Rcontact_Main_TestSuite", i.e. your second job.
Then build the first job.
This will run your first job, and after that it will trigger your second job.
Hope this solves your issue; let me know your feedback.
The configuration below works for me.
I used the Multijob plugin and configured both jobs in different phases, as shown here

Splitting complex builds in Jenkins

Up until now I have always created large builds, e.g. a "checkin build" which simply ensures that the code compiles, all tests pass, JIRA integration works, and so forth (also reports like coverage, Checkstyle, etc.).
Then I have another large build, "nightly", which does the same as above but also runs Maven site, Javadoc, and other longer-running tasks; that is, it does a new checkout and builds everything again (every night, if changes were registered in source control).
Now I would like a "build for production", which should do more or less the same as "nightly", with the extension that it should tag and produce an artifact ready for deployment, bump the version, and so forth. Unfortunately I don't always have time to wait for the Maven site etc. to be produced, but I still need them for documentation purposes. I've been looking at the Build Pipeline and Inheritance plugins, but I don't know the pros and cons of these; I'm missing a "best practice" here.
If I could have it my way, I would have a build like "check build"; then a new job would do the tagging and release of the new version (e.g. via the release plugin); then a new job would start the "reporting" stuff; and finally a job would create the Maven sites etc. I would, however, like to do only one checkout, and each build should be triggered by the previous one completing successfully. I have been looking at "copy workspace", but this feels like the wrong way to do this.
Any input, ideas, experience etc. is much appreciated.
Re "[...] "copy workspace", but this feels like the wrong way to do this."
Looks like the Shared workspace plugin is what you are looking for.
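In current Jenkins, the "one checkout, several dependent steps" requirement also maps naturally onto a single Pipeline with stages; the sketch below is illustrative, with the Maven goals as placeholders for the real check/release/reporting steps:

```groovy
// One checkout, then fast and slow parts as separate stages in one
// workspace; a failing stage stops the later ones.
node {
    stage('Checkout') {
        checkout scm                      // single checkout for everything
    }
    stage('Check build') {
        sh 'mvn -B clean verify'          // compile + tests, fails fast
    }
    stage('Tag and release') {
        sh 'mvn -B release:prepare release:perform'
    }
    stage('Reports and site') {
        sh 'mvn -B site'                  // long-running docs, no re-checkout
    }
}
```

Each stage reuses the same workspace, so the "only one checkout" constraint holds without copying or sharing workspaces between jobs.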

Eclipse: create an aggregate Maven task. Can the Eclipse Maven plugin do it?

I have Maven projects open in my Eclipse.
After each code change I have to launch task1, task2, ..., taskN.
Each task executes from a different folder.
Is it possible to create a task which would invoke task1, then task2, ..., then taskN?
If task45 ends in failure, then task46 through taskN should not be launched.
UPDATE
I created aggregate task:
This feature does not seem to be supported by Eclipse yet (Apr 2014). It is a known issue and a ticket is already open for it.
However, you can have a look at the 'Launch Group' feature from the CDT plugin. It seems to be compatible with any launch configuration, including Maven, and it should offer better control over sequential execution of your tasks.

Execute a default task in ANT in case of failure

I'm currently using ANT for building my Java project on a Windows XP machine.
In my build.xml file I've defined 3 tasks, and I would like a default task to be executed in case of failure, before the build closes and exits (like a recovery procedure). I would like to know if that's possible.
Thanks
I googled and found this; it's basically a try/catch for Ant. Might be worth a look: http://ant-contrib.sourceforge.net/tasks/tasks/trycatch.html
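A minimal build.xml sketch using that ant-contrib task; the jar location and the target names ("build", "recover") are assumptions for illustration:

```xml
<project name="demo" default="main">
  <!-- ant-contrib provides <trycatch>; point the classpath at your copy -->
  <taskdef resource="net/sf/antcontrib/antcontrib.properties"
           classpath="lib/ant-contrib-1.0b3.jar"/>

  <target name="build">
    <echo message="real build work goes here"/>
  </target>

  <target name="recover">
    <echo message="recovery procedure goes here"/>
  </target>

  <target name="main">
    <trycatch property="failure.message">
      <try>
        <antcall target="build"/>
      </try>
      <catch>
        <!-- run the recovery target, then still fail the build -->
        <antcall target="recover"/>
        <fail message="Build failed: ${failure.message}"/>
      </catch>
      <finally>
        <echo message="cleanup that runs either way"/>
      </finally>
    </trycatch>
  </target>
</project>
```

Re-raising the failure with `<fail>` inside `<catch>` keeps the build marked as failed after the recovery procedure has run.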
I've never heard of such a property/task, but the following comes to mind: you could use an additional 'master' Ant script.
The master script (a new one) includes all public targets from the original one and delegates the work to the corresponding task in your build script (via ant calls).
If a delegate fails, the master should be able to recognize the failure and could call the 'clean-up' target (either in the master or in the original build file).
