Parametrized JBehave tests - Java

I have a story with parameters:
Given save in the <fileName> the data from <sqlQuery>
Then...
Examples:
|fileName |sqlQuery|
|file.txt |query1  |
I run my test on a particular environment with Maven using -Denvironment=DEV.
Now I would like to run this test on UAT using -Denvironment=UAT, but the problem is that the sqlQuery is different there. How can I indicate in the Java code that query1 should be used if -Denvironment=DEV, but query2 if -Denvironment=UAT, using JBehave stories?
Can anyone help me with that?

In my opinion the easiest and cleanest way is to provide different parameters for each environment directly in the story/scenario,
and pick the proper parameter in the Java code depending on the environment.
We are using this method for 3 test environments (DEV, UAT, PRE) and it works very well for us.
When the story fails you do not need to dig into logs or the implementation to find which value of the parameter was used; everything is visible in the JBehave report. Changing parameters is also easier: the tester just changes the story and does not need to look into the implementation in code.
Given save in the <fileName> the data from the query:
- DEV:<DevSqlQuery> UAT:<UatSqlQuery> PREPROD:<PreSqlQuery>
Then...
Examples:
|fileName |DevSqlQuery|UatSqlQuery|PreSqlQuery|
|file.txt |query1 |query2 |query3 |
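A minimal sketch of the corresponding step implementation in Java (the step text, the "environment" system property name, and the class/method names are assumptions for illustration):

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Named;

public class SaveQuerySteps {

    @Given("save in the <fileName> the data from the query: - DEV:<DevSqlQuery> UAT:<UatSqlQuery> PREPROD:<PreSqlQuery>")
    public void saveDataFromQuery(@Named("fileName") String fileName,
                                  @Named("DevSqlQuery") String devSqlQuery,
                                  @Named("UatSqlQuery") String uatSqlQuery,
                                  @Named("PreSqlQuery") String preSqlQuery) {
        // Pick the proper query depending on -Denvironment=DEV/UAT/PREPROD (DEV is the assumed default)
        String environment = System.getProperty("environment", "DEV");
        String sqlQuery;
        switch (environment) {
            case "UAT":
                sqlQuery = uatSqlQuery;
                break;
            case "PREPROD":
                sqlQuery = preSqlQuery;
                break;
            default:
                sqlQuery = devSqlQuery;
                break;
        }
        // ... execute sqlQuery and save the result to fileName ...
    }
}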


How do you log all the compiler options being used by Gradle?

I'm using Gradle 7.4.2. I'm trying to see what it uses for generatedSourceOutputDirectory (among other CompileOptions).
I've tried:
tasks.compileJava {
    println(options.generatedSourceOutputDirectory.toString())
}
but this prints an unhelpful:
task ':lib:compileJava' property 'options.generatedSourceOutputDirectory'
Sleuthing around the code itself on GitHub, I see that its defaults are (seemingly) managed via XML code here.
How can I see what the current compile options are?
The option generatedSourceOutputDirectory is of type DirectoryProperty; therefore, to get the value that it holds, you need to call get().
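For example, a sketch in the build script (assuming the Kotlin DSL shown in the question; doFirst defers the lookup until the task actually runs):

tasks.compileJava {
    doFirst {
        // generatedSourceOutputDirectory is a DirectoryProperty; get() resolves the lazy value
        println(options.generatedSourceOutputDirectory.get())
        // other CompileOptions can be inspected the same way, e.g.:
        println(options.compilerArgs)
    }
}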

ONNX with custom ops from TensorFlow in Java

In order to make use of machine learning in Java, I'm trying to train a model in TensorFlow, save it as an ONNX file, and then use the file for inference in Java. While this works fine with simple models, it gets more complicated with pre-processing layers, as they seem to depend on custom operators.
As an example, this Colab (https://www.tensorflow.org/tutorials/keras/text_classification) deals with text classification and uses a TextVectorization layer this way:
@tf.keras.utils.register_keras_serializable()
def custom_standardization2(input_data):
    lowercase = tf.strings.lower(input_data)
    stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
    return tf.strings.regex_replace(stripped_html, '[%s]' % re.escape(string.punctuation), '')

vectorize_layer = layers.TextVectorization(
    standardize=custom_standardization2,
    max_tokens=max_features,
    output_mode='int',
    output_sequence_length=sequence_length
)
It is used as pre-processing layer in the compiled model:
export_model = tf.keras.Sequential([
    vectorize_layer,
    model,
    layers.Activation('sigmoid')
])
export_model.compile(loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy'])
In order to create the ONNX file, I save the model as a protobuf and then convert it to ONNX:
export_model.save("saved_model")
python -m tf2onnx.convert --saved-model saved_model --output saved_model.onnx --extra_opset ai.onnx.contrib:1 --opset 11
Using onnxruntime-extensions it is now possible to register the custom ops and to run the model in Python for inference.
import onnxruntime
from onnxruntime import InferenceSession
from onnxruntime_extensions import get_library_path
so = onnxruntime.SessionOptions()
so.register_custom_ops_library(get_library_path())
session = InferenceSession('saved_model.onnx', so)
res = session.run(None, { 'text_vectorization_2_input': example_new })
This raises the question of whether it's possible to use the same model in Java in a similar way. ONNX Runtime for Java does have a SessionOptions#registerCustomOpLibrary function, so I thought of something like this:
OrtEnvironment env = OrtEnvironment.getEnvironment();
OrtSession.SessionOptions options = new OrtSession.SessionOptions();
options.registerCustomOpLibrary(""); // reference the library
OrtSession session = env.createSession("...", options);
Does anyone have an idea whether the use case described is feasible, or how to use models with pre-processing layers in Java (without using TensorFlow Java)?
UPDATE:
Spotted a potential solution. If I understand the comments in this GitHub issue correctly, one possibility is to build the ONNXRuntime Extensions package from source (see this explanation) and reference the generated library file by calling registerCustomOpLibrary in the ONNX Runtime library for Java. However, as I have no experience with tools like CMake, this might become a challenge for me.
The solution you propose in your update is correct: you need to compile the ONNX Runtime extensions package from source to get the dll/so/dylib, and then you can load that into ONNX Runtime in Java using the session options. The Python whl doesn't distribute the binary in a format that can be loaded outside of Python, so compiling from source is the only option. I wrote the ONNX Runtime Java API, so if this approach fails, open an issue on GitHub and we'll fix it.
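For completeness, a minimal sketch of what that could look like on the Java side (the library file name/path is an assumption for a Linux build of onnxruntime-extensions; the input name and shape follow the Python example above):

import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtSession;
import java.util.Map;

public class OnnxTextClassification {
    public static void main(String[] args) throws Exception {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        OrtSession.SessionOptions options = new OrtSession.SessionOptions();
        // Library built from the onnxruntime-extensions sources (path is an assumption)
        options.registerCustomOpLibrary("/path/to/libortextensions.so");
        try (OrtSession session = env.createSession("saved_model.onnx", options)) {
            // One string input with shape [1, 1], matching the input name used in the Python example
            OnnxTensor input = OnnxTensor.createTensor(env,
                    new String[] {"The movie was great!"}, new long[] {1, 1});
            try (OrtSession.Result result = session.run(Map.of("text_vectorization_2_input", input))) {
                System.out.println(result.get(0).getValue());
            }
        }
    }
}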

Cannot get api hostname via System property in Java

I recently got the code to write BDD tests with Cucumber in Java. There is already a Maven project with a couple of tests and a test framework. I need to continue writing BDD tests using this framework.
I am writing API tests, and when I try to run them I get an error. I found where it fails, but I want to figure out the idea behind this part of the code. Let me share some of it:
So the test framework is collecting info about the API host name this way:
public class AnyClass {

    private static final String API_HOSTNAME = "hostname";

    private static String getAPIHostName() {
        String apiHostName = System.getProperty(API_HOSTNAME);
        ...
    }
}
When I leave it as is and run the test, I get an error that the host name is empty.
Can you advise on what might be expected under the System property key "hostname"?
P.S. I tried to use http://localhost and http://127.0.0.1, where my API is located, instead of assigning the system property, but it cannot find such a host name.
Can you advise on what might be expected under the System property key "hostname"?
Yes, I needed to run the tests from the command line with syntax like:
mvn clean verify -Dhostname=http://127.0.0.1:8080
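So the host name is supplied as a JVM system property and read by the framework via System.getProperty. A minimal sketch (the fallback default here is just an assumption for local runs, not part of the original framework):

public class AnyClass {

    private static final String API_HOSTNAME = "hostname";

    private static String getAPIHostName() {
        // Provided on the command line as: mvn clean verify -Dhostname=http://127.0.0.1:8080
        // Falls back to a local default when the property is not set (assumption)
        return System.getProperty(API_HOSTNAME, "http://localhost:8080");
    }
}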

imageJ plugin argument

Hello, I am trying to pass arguments to my ImageJ PlugIn. However, it seems that no matter what I pass, the argument string is considered empty by the program. I couldn't find any documentation on the internet about THAT issue.
My Java PlugIn looks like this, and it compiles fine.
import ij.IJ;
import ij.plugin.PlugIn;

public class Test implements PlugIn {
    public void run(String args) {
        IJ.log("Starting plugin Test");
        IJ.log("args: ." + args + ".");
    }
}
I compile, make a .jar file and put it into the ImageJ plugins folder.
I can call it from the ImageJ user interface (Plugins > Segmentation > Test), and the macro recorder will record the command used:
run("Test");
Then my code is executed and the log window pops up as expected:
Starting plugin Test
args: ..
I can manually run the same command in a .ijm file, and get the same result.
However, when I run the following macro command:
run("Test", "my_string");
I get the same results in the log window:
Starting plugin Test
args: .. // <- I would like to get "my_string" passed there
whereas it should have displayed (at least that is what I expect it to do):
Starting plugin Test
args: .my_string.
So my question is: how can I pass parameters to a PlugIn, and especially, how can I access them?
Many thanks
EDIT
Hey, I found a way to bypass that:
Using Macro.getOptions(): this method retrieves the string passed as an argument to the plugin.
However, I still can't find a way to pass more than one string argument. I tried overloading the PlugIn.run() method, but it doesn't work at all.
My quick fix is to put all my arguments into one string, separated by spaces, and then split that string:
String[] arguments = Macro.getOptions().split(" ");
I don't see a more convenient way to get around that. I can't believe how stupid this situation is.
Please, if you have a better solution, feel free to share! Thanks
You are confusing the run(String arg) method in ij.plugin.PlugIn with the ImageJ macro command run("command"[, "options"]), which calls IJ.run(String command, String options).
In the documentation for ij.plugin.PlugIn#run(String arg), it says:
This method is called when the plugin is loaded. 'arg', which may be blank, is the argument specified for this plugin in IJ_Props.txt.
So, arg is an optional argument that you can use in IJ_Props.txt or in the plugins.config file of your plugin to assign different menu commands to different functions of your plugin (see also the excellent documentation on the Fiji wiki).
To make use of the options parameter when running your plugin from macro code, you should use a GenericDialog to get the options, or (as you apparently learned the hard way) use the helper function Macro.getOptions().
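A minimal sketch of the GenericDialog approach (the field names are illustrative): when the plugin is called from a macro, e.g. run("Test", "name=my_string count=3"), the dialog is filled from the options string instead of being shown.

import ij.IJ;
import ij.gui.GenericDialog;
import ij.plugin.PlugIn;

public class Test implements PlugIn {
    public void run(String arg) {
        GenericDialog gd = new GenericDialog("Test");
        gd.addStringField("name", "");        // filled from "name=..." in the macro options
        gd.addNumericField("count", 1, 0);    // filled from "count=..." in the macro options
        gd.showDialog();
        if (gd.wasCanceled()) return;
        String name = gd.getNextString();
        int count = (int) gd.getNextNumber();
        IJ.log("name: " + name + ", count: " + count);
    }
}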

Can we check the queue depth using a scripting language?

Is it feasible to check queue depth (MQ) using any scripts? [No restrictions on the language.] The plan is to look at non-Java solutions.
I do understand that it is achievable in Java using MQQueueManager, but that would require using the client API. Hence I am checking for alternative options or better practices.
InquireQueue at http://www.capitalware.biz/mq_code_perl_python.html looks similar [but a bit outdated].
Didn't Google give you a recent blog post I wrote called "How to Clear a MQ Queue from a Script or Program" at http://www.capitalware.biz/rl_blog/?p=1616?
Just change the MQSC "clear" command to "current depth" (CURDEPTH), i.e.:
DIS QL(TEST.*) CURDEPTH
Does nobody use Google anymore?
PyMQI, an open-source Python extension for WebSphere MQ
http://metacpan.org/pod/MQSeries::Queue
my %qattr = $queue->Inquire( qw(MaxMsgLength MaxQDepth) );
The Perl MQSeries module is very complete. Below is some sample code. (Part of the credit for the sample probably goes to someone else, but it has been floating around my drive for years.) The code connects to the queue manager specified on the command line; if none is supplied, it connects to the default queue manager. It then inquires about the queue name passed in, specifically the current depth of that queue, and displays it to the user. This code can easily be modified to display other queue properties. Furthermore, MQINQ can be used to inquire about the attributes of other objects, not just queues. Here is the subset sample code:
use MQSeries;

my $qname    = $ARGV[0];
my $qmgrname = $ARGV[1];

my $Hconn = MQCONN($qmgrname, $CompCode, $Reason);
print "MQCONN reason: $Reason\n";

my $ObjDesc = { ObjectType => MQOT_Q, ObjectName => $qname };
my $Options = MQOO_INQUIRE | MQOO_SET | MQOO_FAIL_IF_QUIESCING;
my $Hobj = MQOPEN($Hconn, $ObjDesc, $Options, $CompCode, $Reason);
print "MQOPEN reason: $Reason\n";

my ($depth) = MQINQ($Hconn, $Hobj, $CompCode, $Reason, MQIA_CURRENT_Q_DEPTH);
print "Depth of $qname is: $depth\n";

MQCLOSE($Hconn, $Hobj, MQCO_NONE, $CompCode, $Reason);
print "MQCLOSE reason: $Reason\n";

MQDISC($Hconn, $CompCode, $Reason);
print "MQDISC reason: $Reason\n";
If you are logged in as the mqm user on Linux and want a quick check of queues with messages in them, here is a quick fix:
echo "dis ql(*) CURDEPTH" | runmqsc <QMGRNAME> | grep -v '(0' | grep -v 'AMQ'
This gives you command line output, and you can schedule the same command in crontab if needed, without having to save a script for it.
I know it's not neat, but it may be the quickest of solutions.
There are many JVM-based scripting(-ish) languages that give you access to Java classes. Some need a thin glue layer, some need nothing at all:
Groovy
Jython
Scala
Clojure
etc.
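For reference, a minimal sketch of the calls such a script would make against the IBM MQ classes (queue manager and queue names are placeholders); a Groovy or Jython script can call these same classes directly:

import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class QueueDepth {
    public static void main(String[] args) throws Exception {
        // Connect in bindings mode to a local queue manager (placeholder name)
        MQQueueManager qMgr = new MQQueueManager("QMGR1");
        MQQueue queue = qMgr.accessQueue("TEST.QUEUE",
                CMQC.MQOO_INQUIRE | CMQC.MQOO_FAIL_IF_QUIESCING);
        System.out.println("Current depth: " + queue.getCurrentDepth());
        queue.close();
        qMgr.disconnect();
    }
}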
