I am facing a problem with jobParameters in Spring Batch. I have a jobParameter which is optional. The first time I pass the job parameter through CommandLineJobRunner, it works. The second time, I do not pass any jobParameter, but it still picks up the previous one. When I clear my meta-data, the jobParameter comes through as null, since I am not passing it. How can I fix this without clearing the meta-data? Does this happen normally in Spring Batch?
Edit:
I am using MapJobRegistry, and next is used while launching the job. When I debugged, I observed that in order to increment the run.id it loads all the previous parameters:
public JobParameters getNext(JobParameters parameters) {
    if (parameters == null) {
        parameters = new JobParameters();
    }
    long id = parameters.getLong(key, 0L) + 1;
    return new JobParametersBuilder(parameters).addLong(key, id).toJobParameters();
}
The first thing to mention is that you should not use any of the Map... classes. They are not intended for production and, therefore, you are better off using the various JDBC implementations. If you don't want to use a real DB, you can always use an in-memory DB.
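For instance, a minimal sketch of backing the job repository with an embedded H2 database (the schema script is the one that ships with Spring Batch; the surrounding @Configuration class and the rest of the wiring are assumed):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

// An in-memory H2 DataSource initialized with the Spring Batch meta-data schema;
// the JDBC-based JobRepository can then run against it.
@Bean
public DataSource dataSource() {
    return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .addScript("classpath:org/springframework/batch/core/schema-h2.sql")
            .build();
}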
But about your initial question:
You are using the CommandLineJobRunner together with the option "next".
Having a look at the method CommandLineJobRunner.start(), you find the following lines:
if (opts.contains("-next")) {
    JobParameters nextParameters = getNextJobParameters(job);
    Map<String, JobParameter> map = new HashMap<String, JobParameter>(nextParameters.getParameters());
    map.putAll(jobParameters.getParameters());
    jobParameters = new JobParameters(map);
}
You can see that getNextJobParameters is called. Inside this method, the data of the previous run is loaded via jobExplorer.getJobInstances(jobIdentifier, 0, 1) (if there was a previous run). If there is a previous run, the job parameters of that old run are returned after applying the incrementer's getNext method; hence, this is the reason you get your old parameters.
Now, this is the technical explanation, but the question that follows is how you should use the "next" and "restart" options in order to get what you want.
Using next:
- next only works as expected if you launch a job with the same name and the same job parameters. Otherwise the results can be confusing. Actually, I use next only inside unit and integration tests.
Using restart:
- you can use "restart" if the previous job execution with the same job name failed. Here too, the job parameters will be taken from your previous launch.
For a normal start of a job, you should use neither next nor restart. A normal start of a job should always have a unique job parameter, for instance a "runId" whose value is changed with every start of the job (otherwise you would get a JobInstanceAlready... exception).
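A minimal sketch of such a launch (assuming jobLauncher and job are wired from your application context):

// Give every launch a unique "runId" so a new JobInstance is created each time.
JobParameters params = new JobParametersBuilder()
        .addLong("runId", System.currentTimeMillis()) // unique value per launch
        .toJobParameters();
jobLauncher.run(job, params);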
In case of unit tests, I use a unique "runId" for every test case. And here, I'm using the "next" option.
First, some background on why I want this crazy thing. I'm building a plugin for Jenkins that provides an API for scripts that are started from a pipeline script, so they can independently communicate with Jenkins.
For example, a shell script can then tell Jenkins to start a new stage from the running script.
I've got the communication between the script and Jenkins working, but the problem is that I now want to start a stage from a callback in my code, and I can't seem to figure out how to do it.
Stuff I've tried and failed at:
Start a new StageStep.java
I can't seem to find a way to correctly instantiate and inject the step into the lifecycle. I've looked into DSL.java, but I can't seem to get hold of an instance to call invokeStep() on, nor was I able to find out how to instantiate DSL.java with the right environment.
Look at StageStepExecution.java and do what it does.
It seems to either invoke the body with an environment variable and nothing else, or set some actions and save the state in a config file when it has no body. I could not find out how the Pipeline: Stage View Plugin hooks into this, but it doesn't seem to read the config file. I've tried setting the actions (even the inner class, through reflection), but that did not seem to do anything.
Inject a custom string as Groovy body and call it with csc.newBodyInvoker()
A hacky solution I came up with was just generating the Groovy script and running it like the ParallelStep does. But the sandbox does not allow me to call new GroovyShell().evaluate(""), and if I approve that call, the 'stage' step throws a MissingMethodException. So I also do not instantiate the script with the right environment. Providing the EnvironmentExpander does not make any difference.
Referencing and modifying workflow/{n}.xml
Changing the name of a stage in the relevant workflow/{n}.xml and rebooting the server updates the name of the stage, but modifying my custom stage to look like a regular one does not seem to add the step as a stage.
Stuff I've researched:
Whether some other plugin does something like this, but I couldn't find any example of plugins starting other steps.
How Jenkins handles the scripts and starts the steps, but it seems as though every step is directly called through the method name after the script is parsed, and I found no way to hook into this.
Other plugins using the StageView through other methods, but I could not find any.
Adding an AtomNode as a head onto the running thread, but I couldn't find how to replace/add the head, and I'm hesitant to mess with Jenkins' threading.
I've spent multiple days on this seemingly trivial call, but I can't seem to figure it out.
So the latest thing I tried actually worked and is displayed correctly, but it ain't pretty.
I basically reimplemented the implementation of DSL.invokeStep(), which required me to use reflection A LOT. This is not safe and will of course break with any changes, so I'll open an issue in Jenkins' ticket system in the hope that they will add a public interface for doing this. I'm just hoping this won't give me any weird side effects.
// First, get some environment stuff
CpsThread cpsThread = CpsThread.current();
CpsFlowExecution currentFlowExecution = (CpsFlowExecution) getContext().get(FlowExecution.class);

// Instantiate the stage's descriptor
StageStep.DescriptorImpl stageStepDescriptor = new StageStep.DescriptorImpl();

// Now we need to put a new FlowNode as the head of the step stack. This is of course
// not possible directly, but everything is also outside of the sandbox, so putting
// the class in the same package doesn't work either.
// Get the 'head' field
Field cpsHeadField = CpsThread.class.getDeclaredField("head");
cpsHeadField.setAccessible(true);
Object headValue = cpsHeadField.get(cpsThread);
// Get its value
Method head_get = headValue.getClass().getDeclaredMethod("get");
head_get.setAccessible(true);
FlowNode currentHead = (FlowNode) head_get.invoke(headValue);
// Create a new StepAtomNode starting at the current value of 'head'.
FlowNode an = new StepAtomNode(currentFlowExecution, stageStepDescriptor, currentHead);
// Now set this as the new head.
Method head_setNewHead = headValue.getClass().getDeclaredMethod("setNewHead", FlowNode.class);
head_setNewHead.setAccessible(true);
head_setNewHead.invoke(headValue, an);

// Create a new CpsStepContext; as the constructor is protected, use reflection again
Constructor<?> declaredConstructor = CpsStepContext.class.getDeclaredConstructors()[0];
declaredConstructor.setAccessible(true);
CpsStepContext context = (CpsStepContext) declaredConstructor.newInstance(stageStepDescriptor, cpsThread, currentFlowExecution.getOwner(), an, null);
stageStepDescriptor.checkContextAvailability(context); // Good to check stuff, I guess

// Create a new instance of the step, passing in arguments as a Map
Map<String, Object> stageArguments = new HashMap<>();
stageArguments.put("name", "mynutest");
Step stageStep = stageStepDescriptor.newInstance(stageArguments);

// So start the damn thing
StepExecution execution = stageStep.start(context);

// Now that we have a callable instance, we set the step on the CPS thread. Reflection to the rescue!
Method mSetStep = cpsThread.getClass().getDeclaredMethod("setStep", StepExecution.class);
mSetStep.setAccessible(true);
mSetStep.invoke(cpsThread, execution);

// Finally, start running the step
execution.start();
I'm using JMeter as code (a programmatic approach instead of the GUI, in a Java Maven project) in order to stress-test an AWS Lambda serverless API.
I've already developed a test plan, thread group, HTTPSamplerProxy and so on...
The execution of the calls to the API works perfectly, but that is not the case for, e.g., the DurationAssertion I've added to the HTTP sampler.
I've also set a CSV file for the output, where after execution I see everything OK (status code 200, ...), but the test should fail because it exceeds the DurationAssertion I've configured (in addition to other assertion test elements).
I thought that perhaps I had to set "enabled" = true on the DurationAssertion object, but that had no effect. Also, I've tried to access the JMeter context in this way:
JMeterContextService.getContext().getPreviousResult()
I expected the above code to retrieve a SampleResult (which has an AssertionResult collection), but the SampleResult is null.
A test plan with test elements (a DurationAssertion in this case) but without the corresponding analysis of the results of those assertions makes no sense. I want to see a failure message on each call that exceeds a certain threshold. If I were using the JMeter GUI, I would add a View Results Tree, which shows a sampler result view with details of the request, the response, and the associated assertions. And in addition to the assertion result (per request), I want to see the request payload, the full response, and the headers. But in programmatic mode (without using the GUI).
So I would highly appreciate it if anyone could give me some hint on how to accomplish this goal, but by code.
UPDATE 1: I share a GitHub gist with the entire source code, as user UBIK LOAD PACK suggested:
https://gist.github.com/svillarreal/5eb90a66b8972633b95c249abb3566da
UPDATE 2: Inspection of the context object (evaluated after the JMeter engine finished its run) - all null inside.
UPDATE 3
i) I've recently found the jmeter.properties file, where I've configured the following properties:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.assertion_results=all
And now the output, as XML instead of CSV, shows at least the sent request payload and the response data, which is VERY useful for analysing error cases (a sketch for setting these properties in code follows this update).
ii) I inspected JMeterContextService.getContext() inside the JMeterEngine execution instead of after it finished its run, and I realized that there is one context per thread group, and during the run this object is fully populated; so now it is clear why in UPDATE 2 all the properties are null.
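As a side note on UPDATE 3: since the whole plan runs programmatically, those two save-service properties can also be set in code before the engine starts instead of editing jmeter.properties; a minimal sketch using JMeterUtils (part of ApacheJMeter_core):

import org.apache.jmeter.util.JMeterUtils;

// Has the same effect as editing jmeter.properties; must run before the engine starts.
JMeterUtils.setProperty("jmeter.save.saveservice.output_format", "xml");
JMeterUtils.setProperty("jmeter.save.saveservice.assertion_results", "all");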
Best regards and thanks!
I can think of at least one use case where your approach will not work: JMeter didn't receive a response from the server at all.
For example, if your server gets overloaded, it might be the case that JMeter never gets the response back; therefore your DurationAssertion will simply not be applied, as PostProcessors, Listeners and Assertions are not fired when the SampleResult is null.
So, in order to be on the safe side, I would recommend applying connect and response timeouts to your HTTP Request sampler(s):
HTTPSamplerProxy httpSampler = new HTTPSamplerProxy();
httpSampler.setConnectTimeout("3000");  // milliseconds
httpSampler.setResponseTimeout("3000"); // milliseconds
//etc.
If you have more than one HTTP Request sampler in the test plan, it makes sense to go for HTTP Request Defaults instead of setting the timeouts individually.
Finally, I was able to fix this. The issue was that I was managing the tree that is passed to the StandardJMeterEngine incorrectly.
In JMeter everything is based on this tree, and just like in the GUI, we have to take care of how the elements are positioned in its hierarchy.
Analysing the library and debugging it intensely, I came to understand in more depth how JMeter works, and that everything is managed starting from the HashTree. So the solution was to add the DurationAssertion and ResponseAssertion as children of the HTTPSamplerProxy node instead of adding them as test elements of the HTTPSamplerProxy.
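A minimal sketch of that fix (testPlanTree, testPlan, threadGroup and httpSampler stand in for the ones already built in the gist):

import org.apache.jmeter.assertions.DurationAssertion;
import org.apache.jorphan.collections.HashTree;

// The assertion has to be a child node of the sampler inside the HashTree,
// not a test element attached to the sampler object itself.
HashTree threadGroupTree = testPlanTree.add(testPlan, threadGroup);
HashTree samplerTree = threadGroupTree.add(httpSampler);

DurationAssertion durationAssertion = new DurationAssertion();
durationAssertion.setName("Duration below 1500 ms");
durationAssertion.setAllowedDuration(1500); // threshold in milliseconds
samplerTree.add(durationAssertion);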
In particular, the method that collects the assertions to check after the execution is the following (and this is what showed me how to manage the HashTree):
// org.apache.jmeter.threads.TestCompiler
private void saveSamplerConfigs(Sampler sam) {
    List<ConfigTestElement> configs = new LinkedList<>();
    List<Controller> controllers = new LinkedList<>();
    List<SampleListener> listeners = new LinkedList<>();
    List<Timer> timers = new LinkedList<>();
    List<Assertion> assertions = new LinkedList<>();
    LinkedList<PostProcessor> posts = new LinkedList<>();
    LinkedList<PreProcessor> pres = new LinkedList<>();
    for (int i = stack.size(); i > 0; i--) {
        addDirectParentControllers(controllers, stack.get(i - 1));
        List<PreProcessor> tempPre = new LinkedList<>();
        List<PostProcessor> tempPost = new LinkedList<>();
        List<Assertion> tempAssertions = new LinkedList<>();
        for (Object item : testTree.list(stack.subList(0, i))) {
            if (item instanceof ConfigTestElement) {
                configs.add((ConfigTestElement) item);
            }
            if (item instanceof SampleListener) {
                listeners.add((SampleListener) item);
            }
            if (item instanceof Timer) {
                timers.add((Timer) item);
            }
            if (item instanceof Assertion) {
                tempAssertions.add((Assertion) item);
            }
            if (item instanceof PostProcessor) {
                tempPost.add((PostProcessor) item);
            }
            if (item instanceof PreProcessor) {
                tempPre.add((PreProcessor) item);
            }
        }
        assertions.addAll(0, tempAssertions);
        pres.addAll(0, tempPre);
        posts.addAll(0, tempPost);
    }
    SamplePackage pack = new SamplePackage(configs, listeners, timers, assertions,
            posts, pres, controllers);
    pack.setSampler(sam);
    pack.setRunningVersion(true);
    samplerConfigMap.put(sam, pack);
}
Also, I had to activate the following property:
jmeter.save.saveservice.assertion_results_failure_message=true
As a consequence, I now have my CSV report file with the assertion result messages included in a dedicated column.
Well, issue resolved. I've updated the GitHub gist with the final solution. Many thanks to all who read this post and tried to collaborate.
Best regards,
I have a situation where I want to execute a system process on each worker within Spark. I want this process to be run on each machine once. Specifically, this process starts a daemon which is required to be running before the rest of my program executes. Ideally this should execute before I've read any data in.
I'm on Spark 2.0.2 and using dynamic allocation.
You may be able to achieve this with a combination of a lazy val and a Spark broadcast. It will be something like below. (I have not compiled the code below; you may have to change a few things.)
object ProcessManager {
  lazy val start = {
    // Start your process here; a lazy val is evaluated at most once per JVM.
  }
}
You can broadcast this object at the start of your application before you do any transformations.
val pm = sc.broadcast(ProcessManager)
Now, you can access this object inside your transformations like you do with any other broadcast variable, and invoke the lazy val.
rdd.mapPartitions { itr =>
  pm.value.start
  // Other stuff here.
  itr
}
An object with static initialization which invokes your system process should do the trick.
object SparkStandIn extends App {
  object invokeSystemProcess {
    import sys.process._
    // Objects are lazy in Scala: this line runs once per JVM, on first access.
    val errorCode = "echo Whatever you put in this object should be executed once per jvm".!
    def doIt(): Unit = {
      // No-op: calling this method just forces the object to be initialized.
      // Another way to make sure the instantiation succeeded is to check that
      // errorCode does not represent an error.
    }
  }
  invokeSystemProcess.doIt()
  invokeSystemProcess.doIt() // even if doIt is invoked multiple times, the static initialization happens once
}
A specific answer for a specific use case: I have a cluster with 50 nodes and I wanted to know which ones have the CET timezone set:
(1 until 100).toSeq.toDS.
  mapPartitions(itr => {
    sys.process.Process(
      Seq("bash", "-c", "echo $(hostname && date)")
    ).lines.toIterator
  }).
  collect().
  filter(_.contains(" CET ")).
  distinct.
  sorted.
  foreach(println)
Notice that I don't think it's 100% guaranteed you'll get a partition on every node, so the command might not be run on every node, even when using a 100-element Dataset in a cluster with 50 nodes like in the previous example.
I am processing an Avro file with a list of records and doing a client.put for each record to my local Aerospike store.
For some reason, the put succeeds for a certain number of records and fails for the rest. I am doing this -
client.put(writePolicy, recordKey, bins);
The related values for the failed call are -
namespace = test
setname = test_set
userkey = some_string
write policy = null
Bins -
is_user:1
prof_loc:530049,530046,530032,530031,530017,530016,500046
rfm:Platinum
store_browsed:some_string
store_purch:some_string
city_id:null
Log Snippet -
com.aerospike.client.AerospikeException: Error Code 4: Parameter error
    at com.aerospike.client.command.WriteCommand.parseResult(WriteCommand.java:72)
    at com.aerospike.client.command.SyncCommand.execute(SyncCommand.java:56)
    at com.aerospike.client.AerospikeClient.put(AerospikeClient.java:338)
What could possibly be the issue?
Finally. Resolved!
I was using the REPLACE RecordExistsAction in this case. Any bin with a null value will fail in this configuration. Aerospike treats a null value in a bin as equivalent to removing that bin from the record for a key. Thus the REPLACE configuration doesn't make sense for such an operation, hence the parameter error - an invalid DB operation.
The UPDATE configuration, on the other hand, works perfectly fine.
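A minimal sketch of that change (client, recordKey and bins as in the question):

import com.aerospike.client.policy.RecordExistsAction;
import com.aerospike.client.policy.WritePolicy;

// With UPDATE, bins are merged into the existing record and a null bin value
// removes that bin instead of triggering a parameter error.
WritePolicy writePolicy = new WritePolicy();
writePolicy.recordExistsAction = RecordExistsAction.UPDATE;
client.put(writePolicy, recordKey, bins);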
Aerospike allows reading and writing with great flexibility. For developers to harness this functionality, Aerospike exposes a great number of variables on both Policy and WritePolicy, which at times can be intimidating and error-prone for beginners. A parameter error simply means that some of the configurations are not coherent with each other. An easy start would be to use the default read or write policy, which can be obtained by passing null as the policy.
Eg:
aeroClient.put(null, key, new Bin("binName", object));
Below is a snippet of the Aerospike put method code:
public final void put(WritePolicy policy, Key key, Bin... bins) throws AerospikeException {
    if (policy == null) {
        policy = writePolicyDefault;
    }
    WriteCommand command = new WriteCommand(cluster, policy, key, bins, Operation.Type.WRITE);
    command.execute();
}
I recently got this error because the expiration value I was using in the writePolicy was more than the default expiration time for the cache.
I am unit testing a Play Framework based application. As I read in the documentation, for the sake of clearing the state, before every test I reload the fixtures like this:
@Before
public void setUp() {
    Fixtures.deleteAll();
    Fixtures.load("data.yml");
    Logger.info("FIXTURES RELOADED");
}
Then I go to the web-based testing platform (http://localhost:9000/@tests), choose a test that deals with fetching some data (User u = User.findById(1l);) and then assert against the data. It works.
However, if I try to select the test again and rerun it, it fails with:
A java.lang.NullPointerException has been caught, Try to read name on null object models.User
If I stop the application completely and restart it, it runs again (the first time), but starting and stopping takes a bit of time and is quite tedious if you do it 10 times a minute.
I am using Play 1.2.5
The problem is the auto-incrementing user ID (it increments on every insert) while your test tries to get the user with ID 1 every time.
You can get the newly created user's ID and use it in your test, or find the user by another field whose value you know for sure.
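For example, a minimal sketch of the second option (the email field and its value are hypothetical; use whatever your data.yml actually fixes):

@Test
public void fetchesUserRegardlessOfGeneratedId() {
    // Look the user up by a stable fixture field ("email" is assumed here)
    // instead of relying on an auto-incremented ID.
    User u = User.find("byEmail", "bob@example.com").first();
    assertNotNull(u);
}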