Correct usage of junit-toolbox MultithreadingTester - java

Background
I have read the documentation for the MultithreadingTester provided in junit-toolbox and still don't entirely grok if I'm using it correctly.
How I would expect it to work:
MultithreadingTester.add(callable1, callable2)
// validate my results from running callable1 and callable2
.run();
Where callable1 and callable2 would be executed and then I could verify the results ... then the MultithreadingTester would run the two callables again ... and once again I'd verify the results.
From what I've put together my expectations are misaligned with what's provided.
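For reference, here is my current best guess at the intended usage, pieced together from the docs (an untested sketch, so the package name, the fluent methods, and RunnableAssert are assumptions on my part): per-iteration assertions would live inside a RunnableAssert passed to add(), and any check of the final shared state would come after run() returns, because the tester apparently never hands control back between rounds.
import com.googlecode.junittoolbox.MultithreadingTester;
import com.googlecode.junittoolbox.RunnableAssert;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CounterTest {
    // A trivial synchronized counter whose thread safety we want to exercise.
    static class Counter {
        private int value;
        synchronized void increment() { value++; }
        synchronized int get() { return value; }
    }

    @Test
    public void incrementIsThreadSafe() {
        Counter counter = new Counter();
        // Per-iteration assertions go inside the RunnableAssert; the tester runs it
        // concurrently and does not pause between rounds for verification.
        new MultithreadingTester()
                .numThreads(10)
                .numRoundsPerThread(1000)
                .add(new RunnableAssert("increment the counter") {
                    @Override
                    public void run() throws Exception {
                        counter.increment();
                    }
                })
                .run();
        // Final-state verification only after run() has completed: 10 threads * 1000 rounds each.
        assertEquals(10_000, counter.get());
    }
}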
Question
Does someone have a simple example of using the MultithreadingTester to validate a synchronized code path?

Related

jenkins plugin - Starting and stopping a Stage from inside a plugin

First, some background on why I want this crazy thing. I'm building a plugin in Jenkins that provides an API for scripts that are started from a pipeline script to independently communicate with Jenkins.
For example, a shell script can tell Jenkins to start a new stage from the running script.
I've got the communication between the script and Jenkins working, but the problem is that I now want to start a stage from a callback in my code, and I can't figure out how to do it.
Stuff I've tried and failed at:
Start a new StageStep.java
I can't seem to find a way to correctly instantiate and inject the step into the lifecycle. I've looked into DSL.java, but I can't seem to get an instance to call invokeStep() on, nor was I able to find out how to instantiate DSL.java with the right environment.
Look at StageStepExecution.java and do what it does.
It seems to either invoke the body with an Environment Variable and nothing else, or set some actions and save the state in a config file when it has no body. I could not find out how the Pipeline: Stage View Plugin hooks into this, but it doesn't seem to read the config file. I've tried setting the Actions (even the inner class through reflection) but that did not seem to do anything.
Inject a custom string as Groovy body and call it with csc.newBodyInvoker()
A hacky solution I came up with was just generating the Groovy script and running it like the ParallelStep does. But the sandbox does not allow me to call new GroovyShell().evaluate(""), and if I approve that call, the 'stage' step throws a MissingMethodException. So I'm also not instantiating the script with the right environment. Providing the EnvironmentExpander does not make any difference.
Referencing and modifying workflow/{n}.xml
Changing the name of a stage in the relevant workflow/{n}.xml and rebooting the server updates the name of the stage, but modifying my custom stage to look like a regular one does not seem to add the step as a stage.
Stuff I've researched:
Whether some other plugin does something like this, but I couldn't find any examples of plugins starting other steps.
How Jenkins handles the scripts and starts the steps, but it seems as though every step is directly called through the method name after the script is parsed, and I found no way to hook into this.
Other plugins using the StageView through other methods, but I could not find any.
Adding an AtomNode as a head onto the running thread, but I couldn't find how to replace/add the head and am hesitant to mess with Jenkins' threading.
I've spent multiple days on this seemingly trivial call, but I can't seem to figure it out.
So the latest thing I tried actually worked, and is displayed correctly, but it ain't pretty.
I basically reimplemented the implementation of DSL.invokeStep(), which required me to use reflection A LOT. This is not safe and will of course break with any changes, so I'll open an issue in Jenkins' ticket system in the hope that they add a public interface for doing this. I'm just hoping this won't give me any weird side effects.
// First, get some environment stuff
CpsThread cpsThread = CpsThread.current();
CpsFlowExecution currentFlowExecution = (CpsFlowExecution) getContext().get(FlowExecution.class);
// instantiate the stage's descriptor
StageStep.DescriptorImpl stageStepDescriptor = new StageStep.DescriptorImpl();
// now we need to put a new FlowNode as the head of the step-stack. This is of course not possible directly,
// but everything is also outside of the sandbox, so putting the class in the same package doesn't work
// get the 'head' field
Field cpsHeadField = CpsThread.class.getDeclaredField("head");
cpsHeadField.setAccessible(true);
Object headValue = cpsHeadField.get(cpsThread);
// get its value
Method head_get = headValue.getClass().getDeclaredMethod("get");
head_get.setAccessible(true);
FlowNode currentHead = (FlowNode) head_get.invoke(headValue);
// create a new StepAtomNode starting at the current value of 'head'.
FlowNode an = new StepAtomNode(currentFlowExecution, stageStepDescriptor, currentHead);
// now set this as the new head.
Method head_setNewHead = headValue.getClass().getDeclaredMethod("setNewHead", FlowNode.class);
head_setNewHead.setAccessible(true);
head_setNewHead.invoke(headValue, an);
// Create a new CpsStepContext, and as the constructor is protected, use reflection again
Constructor<?> declaredConstructor = CpsStepContext.class.getDeclaredConstructors()[0];
declaredConstructor.setAccessible(true);
CpsStepContext context = (CpsStepContext) declaredConstructor.newInstance(stageStepDescriptor, cpsThread, currentFlowExecution.getOwner(), an, null);
stageStepDescriptor.checkContextAvailability(context); // Good to check stuff I guess
// Create a new instance of the step, passing in arguments as a Map
Map<String, Object> stageArguments = new HashMap<>();
stageArguments.put("name", "mynutest");
Step stageStep = stageStepDescriptor.newInstance(stageArguments);
// so start the damn thing
StepExecution execution = stageStep.start(context);
// now that we have a callable instance, we set the step on the Cps Thread. Reflection to the rescue
Method mSetStep = cpsThread.getClass().getDeclaredMethod("setStep", StepExecution.class);
mSetStep.setAccessible(true);
mSetStep.invoke(cpsThread, execution);
// Finally. Start running the step
execution.start();

AssertThrows not throwing exception when going through Java Streams

So I'm writing unit tests that test the capability to blacklist and un-blacklist users (a feature in my code that is itself working fine).
Here's a sample command that works as expected:
assertThrows(ExecutionException.class, () -> onlineStore.lookup("533"));
If I blacklist user "533" and then run the above command, it works fine, because an ExecutionException is raised (you're trying to look up a user who is blacklisted). Similarly, if I had NOT blacklisted user "533" but still ran the above command, the test would fail, which is also expected for a similar reason (i.e. no exception is thrown because you're NOT fetching a blacklisted user).
However, if I have a List of user IDs called userIds (which user "533" is now part of) and I blacklist them all (functionality which I know is working fine), and then run the command below:
userIds.stream().map(id -> assertDoesNotThrow(() -> onlineStore.lookup(id)));
... the test passes, even though it should have FAILED. Why? Because all users are now blacklisted, so when fetching these users, ExecutionExceptions should have been thrown.
If I now, replace the streams command above with either of the following, they work as expected:
assertThrows(ExecutionException.class, () -> onlineStore.lookup("533"));
assertDoesNotThrow(() -> onlineStore.lookup("533"));
So this all leads me to believe that for some reason, when going through Java Streams, thrown ExecutionExceptions aren't getting caught.
Any explanation for this behavior?
You're not calling any terminal operation on the stream, so your assertion is never executed.
You're abusing map(), which is supposed to create a new stream by transforming every element. What you actually want to do is to execute a method which has a side effect on every element. That's what forEach is for (and it's also a terminal operation which actually consumes the stream):
userIds.stream().forEach(id -> assertDoesNotThrow(() -> onlineStore.lookup(id)));
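To see the laziness itself without any JUnit machinery, here is a small self-contained sketch (class name and data are made up) showing that a map() lambda never runs until a terminal operation consumes the stream:
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class StreamLazinessDemo {
    public static void main(String[] args) {
        List<String> userIds = List.of("533", "534", "535");
        AtomicInteger executed = new AtomicInteger();

        // map() is an intermediate, lazy operation: with no terminal operation
        // pulling elements through the pipeline, this lambda never runs.
        userIds.stream().map(id -> {
            executed.incrementAndGet();
            return id;
        });
        System.out.println(executed.get()); // prints 0

        // forEach() is a terminal operation, so the lambda runs for every element.
        userIds.stream().forEach(id -> executed.incrementAndGet());
        System.out.println(executed.get()); // prints 3
    }
}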

Pentaho SDK, how to define a text file input

I'm trying to define a Pentaho Kettle (ktr) transformation via code. I would like to add to the transformation a Text File Input Step: http://wiki.pentaho.com/display/EAI/Text+File+Input.
I don't know how to do this (note that I want to achieve the result in a custom Java application, not using the standard Spoon GUI). I think I should use the TextFileInputMeta class, but when I try to define the filename the transformation doesn't work anymore (it seems empty in Spoon).
This is the code I'm using. I think something is wrong with the third line:
PluginRegistry registry = PluginRegistry.getInstance();
TextFileInputMeta fileInMeta = new TextFileInputMeta();
fileInMeta.setFileName(new String[] {myFileName});
String fileInPluginId = registry.getPluginId(StepPluginType.class, fileInMeta);
StepMeta fileInStepMeta = new StepMeta(fileInPluginId, myStepName, fileInMeta);
fileInStepMeta.setDraw(true);
fileInStepMeta.setLocation(100, 200);
transAWMMeta.addStep(fileInStepMeta);
To run a transformation programmatically, you should do the following:
Initialise Kettle
Prepare a TransMeta object
Prepare your steps
Don't forget about Meta and Data objects!
Add them to TransMeta
Create Trans and run it
By default, each transformation spawns a thread per step, so use trans.waitUntilFinished() to make your thread wait until execution completes
Collect the execution's results if necessary
Use this test as example: https://github.com/pentaho/pentaho-kettle/blob/master/test/org/pentaho/di/trans/steps/textfileinput/TextFileInputTests.java
Also, I would recommend creating the transformation manually in Spoon and loading it from a file, if that is acceptable in your circumstances. This will help you avoid lots of boilerplate code. It is quite easy to run transformations in this case; see an example here: https://github.com/pentaho/pentaho-kettle/blob/master/test/org/pentaho/di/TestUtilities.java#L346
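For the load-from-file route, a rough sketch of the minimum plumbing (assumptions: a recent Kettle/PDI API and a hypothetical my_transformation.ktr saved from Spoon; not tested against your exact version):
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunTransformation {
    public static void main(String[] args) throws Exception {
        // Initialise the Kettle environment (registers step and plugin types).
        KettleEnvironment.init();

        // Load a transformation that was designed and saved in Spoon.
        TransMeta transMeta = new TransMeta("my_transformation.ktr");

        // Create the runtime object, execute it, and wait for all step threads.
        Trans trans = new Trans(transMeta);
        trans.execute(null); // null = no command-line arguments
        trans.waitUntilFinished();

        if (trans.getErrors() > 0) {
            throw new RuntimeException("Transformation finished with errors");
        }
    }
}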

ProgramCallDocument connecting to AS400 from Groovy Hangs

This question is specifically related to the JT400 class ProgramCallDocument and its method callProgram(String ProgramName).
I've tried wrapping the call in a try/catch, but it's not throwing an exception; the debugger steps into the callProgram method and just sits there indefinitely.
A small amount of specific information about the API is available here:
http://publib.boulder.ibm.com/infocenter/iadthelp/v7r0/index.jsp?topic=/com.ibm.etools.iseries.toolbox.doc/rzahhxpcmlusing.htm
Here's the code that I'm running:
AS400 as400System = AS400Factory.getAS400System()
ProgramCallDocument programCallDocument = new ProgramCallDocument(as400System, "com.sample.xpcml.Sample.xpcml")
programCallDocument.setStringValue("sampleProgramName.value", sampleValue)
Boolean didProgramCallDocumentRunSuccessfullyOnTheAS400 = programCallDocument.callProgram("sampleProgramName")
The last line of that snippet is the one that just sits there. I left out the try/catch for brevity.
The XPCML file that the ProgramCallDocument constructor uses is just a proprietary XML format that IBM uses for specifying the parameter lengths and types for a program call. I can come back and add it in if it would be helpful, but the ProgramCallDocument constructor runs validation on the XML, and it didn't come up with any validation errors. I'm not familiar with JT400, or how it does Program Calls, so any assistance would be wonderful.
As a further note, doing some more digging on a related issue today I also found this SO post:
Monitor and handle MSGW messages on a job on an IBM i-series (AS/400) from Java
I think it's relevant to this question, because it's about ways to trap MSGW status on the Java/Groovy side.
It's very likely the called program went into a MSGW status (error).
Check WRKACTJOB JOB(QZRCSRVS) to find the program call job and see the status as well as review the job log.
It may be easier to call a native program using the CommandCall class or as a JDBC stored procedure.
Here's an example of the CommandCall usage in Groovy:
sys = AS400Factory.AS400System
cmd = new CommandCall(sys)
if (!cmd.run("CALL MYLIB.MYPGM PARM('${sampleValue}')")) {
    println cmd.messageList
}
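And, in Java, a minimal sketch of the JDBC stored-procedure route mentioned above (assumptions: the jt400 JDBC driver is on the classpath, and MYLIB.MYPGM takes a single character parameter; system name, credentials, and the parameter value are placeholders):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CallRpgProgram {
    public static void main(String[] args) throws Exception {
        // Older jt400 versions may need the driver registered explicitly.
        Class.forName("com.ibm.as400.access.AS400JDBCDriver");

        // jt400 ships a JDBC driver; the URL scheme is jdbc:as400://<system>
        try (Connection conn = DriverManager.getConnection(
                "jdbc:as400://MYSYSTEM", "MYUSER", "MYPASSWORD")) {
            // Call the program as if it were a stored procedure.
            try (CallableStatement cs = conn.prepareCall("CALL MYLIB.MYPGM(?)")) {
                cs.setString(1, "sampleValue");
                cs.execute();
            }
        }
    }
}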

Retrieving Jobs and/or Process

Is there an easy way to retrieve a job and check e.g. the status with Play?
I have a few encoding jobs/downloading jobs which run for a long time. In some cases I want to cancel them.
Is there a way to retrieve a list of Jobs or something?
E.g. one Job calls the FFMPEG encoder using the ProcessBuilder. I would like to be able to get this job and kill the process if it is not required (e.g. wrong file uploaded and don't want to wait for an hour before it is finished). If I can get a handle to that Job then I can get to the process as well.
I am using Play 1.2.4
See JobsPlugin.java to see how to list all the scheduledJobs.
Getting the task currently being executed is trickier, but you can find your jobs in the JobsPlugin.scheduledJobs list by checking the Job class, and then call a method on your custom Job to tell it to cancel.
Something like
for (Job<?> job : JobsPlugin.scheduledJobs) {
    if (job instanceof MyJob) {
        ((MyJob) job).cancelWork();
    }
}
where cancelWork is your custom method
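A minimal sketch of what such a job could look like (assumption: the job keeps a handle on the Process it spawned; MyJob and cancelWork are names from this answer, not part of Play):
import play.jobs.Job;

public class MyJob extends Job {

    private volatile Process encoderProcess;

    @Override
    public void doJob() throws Exception {
        // Start the long-running external encoder and keep a handle on it.
        encoderProcess = new ProcessBuilder("ffmpeg", "-i", "in.avi", "out.mp4")
                .redirectErrorStream(true)
                .start();
        encoderProcess.waitFor();
    }

    // Called from outside (e.g. from a controller) to abort the encoding.
    public void cancelWork() {
        Process p = encoderProcess;
        if (p != null) {
            p.destroy();
        }
    }
}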
