Cucumber 4.2: Separate Runner for each browser - java

I had implemented Cucumber 4.2 parallel execution for the Chrome browser only. Now I want to implement parallel execution for two browsers (Firefox/Chrome). Please provide an example or skeleton so that I can build on it. Also, where can I find the Cucumber API Javadoc?
Chrome Runner:
public class ChromeTestNGParallel {
    @Test
    public void execute() {
        //Main.main(new String[]{"--threads", "4", "-p", "timeline:target/cucumber-parallel-report", "-g", "com.peterwkc.step_definitions", "src/main/features"});
        String[] argv = new String[]{"--threads", "8", "-p", "timeline:target/cucumber-parallel-report", "-g", "com.peterwkc.step_definitions", "src/main/features"};
        ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
        byte exitStatus = Main.run(argv, contextClassLoader);
    }
}
Firefox Runner:
public class FirefoxTestNGParallel {
    @Test
    public void execute() {
        //Main.main(new String[]{"--threads", "4", "-p", "timeline:target/cucumber-parallel-report", "-g", "com.peterwkc.step_definitions", "src/main/features"});
        String[] argv = new String[]{"--threads", "8", "-p", "timeline:target/cucumber-parallel-report", "-g", "com.peterwkc.step_definitions", "src/main/features"};
        ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
        byte exitStatus = Main.run(argv, contextClassLoader);
    }
}
This is what I want.

I think you can do this outside Cucumber.
The first part is to configure Cucumber to run with a particular browser, using either a command-line parameter or an environment variable.
The second part is to run two (or more) Cucumber instances at the same time. You could use separate virtual machines for this, but basically you just run Cucumber twice with different command-line parameters to configure the browser.
You could even use a paid service like CircleCI to do this for you.
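The first part above can be sketched as follows. This is a minimal, dependency-free sketch under assumptions: the class name `BrowserConfig` and the `browser` system property are hypothetical, and a real project would return a Selenium `WebDriver` (e.g. `new ChromeDriver()` / `new FirefoxDriver()`) instead of a name string; both TestNG runners would then share the same step definitions.

```java
// Hypothetical sketch: select the browser from a JVM system property
// (e.g. -Dbrowser=firefox) so each runner can configure its own browser.
public class BrowserConfig {

    // Reads the "browser" system property, defaulting to Chrome.
    public static String selectedBrowser() {
        return System.getProperty("browser", "chrome").toLowerCase();
    }

    // In a real project this would create and return a WebDriver;
    // here we return the driver name to keep the sketch dependency-free.
    public static String describeDriver() {
        switch (selectedBrowser()) {
            case "firefox":
                return "FirefoxDriver";
            case "chrome":
                return "ChromeDriver";
            default:
                throw new IllegalArgumentException(
                        "Unsupported browser: " + selectedBrowser());
        }
    }

    public static void main(String[] args) {
        System.out.println(describeDriver());
    }
}
```

Each runner (or CI job) would then pass its own `-Dbrowser=...` value when launching, so the two Cucumber instances run against different browsers in parallel.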

Related

Calling a Redis function (loaded Lua script) using the Lettuce library

I am using Java, Spring Boot, Redis 7.0.4, and Lettuce 6.2.0.RELEASE.
I wrote a Lua script as below:
#!lua name=updateRegisterUserJobAndForwardMsg

function updateRegisterUserJobAndForwardMsg(KEYS, ARGV)
    local jobsKey = KEYS[1]
    local inboxKey = KEYS[2]
    local jobRef = KEYS[3]
    local jobIdentity = KEYS[4]
    local accountsMsg = ARGV[1]
    local jobDetail = redis.call('HGET', jobsKey, jobRef)
    local jobObj = cmsgpack.unpack(jobDetail)
    local msgSteps = jobObj['steps']
    msgSteps[jobIdentity] = 'IN_PROGRESS'
    jobDetail = redis.call('HSET', jobsKey, jobRef, cmsgpack.pack(jobObj))
    local ssoMsg = redis.call('RPUSH', inboxKey, cmsgpack.pack(accountsMsg))
    return jobDetail
end

redis.register_function('updateRegisterUserJobAndForwardMsg', updateRegisterUserJobAndForwardMsg)
Then I registered it as a function in my Redis using the below command:
cat updateJobAndForwardMsgScript.lua | redis-cli -x FUNCTION LOAD REPLACE
Now I can easily call my function using Redis-cli as below:
FCALL updateJobAndForwardMsg 4 key1 key2 key3 key4 arg1
And it executes successfully.
Now I want to call my function using Lettuce, which is the Redis client library in my application, but I haven't found anything on the net, and it seems that Lettuce does not support the new Redis 7 feature of calling a FUNCTION with the FCALL command.
Does Lettuce have some other, customizable way of executing Redis commands?
Any help would be appreciated!
After a bit more research, I found the following Stack Overflow answer:
Stack Overflow Answer
And also based on the documentation:
Redis Custom Commands:
Custom commands can be dispatched on the one hand using Lua and the eval() command; on the other side, Lettuce 4.x allows you to trigger own commands. That API is used by Lettuce itself to dispatch commands and requires some knowledge of how commands are constructed and dispatched within Lettuce.
Lettuce provides two levels of command dispatching:
- Using the synchronous, asynchronous or reactive API wrappers which invoke commands according to their nature
- Using the bare connection to influence the command nature and synchronization (advanced)
So I could handle my requirement by creating an interface which extends the io.lettuce.core.dynamic.Commands interface, as below:
public interface CustomCommands extends Commands {
    @Command("FCALL :funcName :keyCnt :jobsKey :inboxRef :jobsRef :jobIdentity :frwrdMsg")
    Object fcall_responseJob(@Param("funcName") byte[] functionName, @Param("keyCnt") Integer keysCount,
                             @Param("jobsKey") byte[] jobsKey, @Param("inboxRef") byte[] inboxRef,
                             @Param("jobsRef") byte[] jobsRef, @Param("jobIdentity") byte[] jobIdentity,
                             @Param("frwrdMsg") byte[] frwrdMsg);
}
Then I could easily call my loaded FUNCTION (which was a Lua script) as below:
private void updateResponseJobAndForwardMsgToSSO(SharedObject message, SharedObject responseMessage) {
    try {
        ObjectMapper objectMapper = new MessagePackMapper();
        RedisCommandFactory factory = new RedisCommandFactory(connection);
        CustomCommands commands = factory.getCommands(CustomCommands.class);
        Object obj = commands.fcall_responseJob(
                Constant.REDIS_RESPONSE_JOB_FUNCTION_NAME.getBytes(StandardCharsets.UTF_8),
                Constant.REDIS_RESPONSE_JOB_FUNCTION_KEY_COUNT,
                (message.getAgent() + Constant.AGENTS_JOBS_POSTFIX).getBytes(StandardCharsets.UTF_8),
                (message.getAgent() + Constant.AGENTS_INBOX_POSTFIX).getBytes(StandardCharsets.UTF_8),
                message.getReferenceNumber().getBytes(StandardCharsets.UTF_8),
                message.getTyp().getBytes(StandardCharsets.UTF_8),
                objectMapper.writeValueAsBytes(responseMessage));
        LOG.info(obj.toString());
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Java: add classpaths at runtime

There are many answers to this question on Stack Overflow, but most of them cast ClassLoader.getSystemClassLoader() to URLClassLoader, and that no longer works. The classes must be found by the system class loader.
Is there another solution?
- without restarting the jar
- without creating my own class loader (in that case I would have to replace the system class loader with my own)
The missing classes/jars only need to be added at startup, and I don't want to add them to the manifest with "Class-Path".
I found the Java Agent with its premain method. That can also work well, but I want to run the premain method without launching with "java -javaagent:... -jar ...".
Currently I restart my program at startup with the missing classpaths:
public class LibLoader {

    protected static List<File> files = new LinkedList<>();

    public static void add(File file) {
        files.add(file);
    }

    public static boolean containsLibraries() {
        RuntimeMXBean runtimeMxBean = ManagementFactory.getRuntimeMXBean();
        String[] classpaths = runtimeMxBean.getClassPath().split(System.getProperty("path.separator"));
        List<File> classpathfiles = new LinkedList<>();
        for (String string : classpaths) classpathfiles.add(new File(string));
        for (File file : files) {
            if (!classpathfiles.contains(file)) return false;
        }
        return true;
    }

    public static String getNewClassPaths() {
        StringBuilder builder = new StringBuilder();
        RuntimeMXBean runtimeMxBean = ManagementFactory.getRuntimeMXBean();
        builder.append(runtimeMxBean.getClassPath());
        for (File file : files) {
            if (builder.length() > 0) builder.append(System.getProperty("path.separator"));
            builder.append(file.getAbsolutePath());
        }
        return builder.toString();
    }

    public static boolean restartWithLibrary(Class<?> main, String[] args) throws IOException {
        if (containsLibraries()) return false;
        List<String> runc = new LinkedList<>();
        runc.add(System.getProperty("java.home") + "\\bin\\javaw.exe");
        RuntimeMXBean runtimeMxBean = ManagementFactory.getRuntimeMXBean();
        List<String> arguments = runtimeMxBean.getInputArguments();
        runc.addAll(arguments);
        File me = new File(LibLoader.class.getProtectionDomain().getCodeSource().getLocation().getPath());
        String classpaths = getNewClassPaths();
        if (!classpaths.isEmpty()) {
            runc.add("-cp");
            runc.add(classpaths);
        }
        if (me.isFile()) {
            runc.add("-jar");
            runc.add(me.getAbsolutePath().replace("%20", " "));
        } else {
            runc.add(main.getName());
        }
        for (String arg : args) runc.add(arg);
        ProcessBuilder processBuilder = new ProcessBuilder(runc);
        processBuilder.directory(new File("."));
        processBuilder.redirectOutput(Redirect.INHERIT);
        processBuilder.redirectError(Redirect.INHERIT);
        processBuilder.redirectInput(Redirect.INHERIT);
        Process process = processBuilder.start();
        try {
            process.waitFor();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return true;
    }
}
I hope someone has a better solution. The problem is that the classes must be found by the system ClassLoader, not by a new ClassLoader.
It sounds like your current solution of relaunching the JVM is the only clean way to do it.
The system ClassLoader cannot be changed, and you cannot add extra JARs to it at runtime.
(If you tried to use reflection to mess with the system classloader's data structures, at best it will be non-portable and version dependent. At worst it will be either error prone ... or blocked by the JVM's runtime security mechanisms.)
The solution suggested by Johannes Kuhn in a comment won't work. The java.system.class.loader property is consulted during JVM bootstrap; by the time your application is running, changing it has no effect. I am not convinced that the approach in his answer would work either.
Here is one possible alternative way to handle this ... if you can work out what the missing JARs are early enough.
Write yourself a Launcher class that does the following:
Save the command line arguments
Find the application JAR file
Extract the Main-Class and Class-Path attributes from the MANIFEST.MF.
Work out what the real classpath should be based on the above ... and other application specific logic.
Create a new URLClassLoader with the correct classpath, and the system classloader as its parent.
Use it to load the main class.
Use reflection to find the main class's main method.
Call it, passing the saved command-line arguments.
This is essentially the approach that Spring Boot's launcher and One-JAR (and other tools) take to handle the "jars in a jar" problem, and it avoids launching two JVMs.
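The core of the launcher described above can be sketched like this. This is a minimal sketch under assumptions: the class name `Launcher` is hypothetical, and the manifest parsing and classpath-derivation logic (steps 3-5) are elided since they are application specific.

```java
import java.io.File;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

public class Launcher {

    // Builds a class loader whose classpath is the given jars/directories,
    // delegating to the system class loader as its parent.
    static URLClassLoader buildLoader(File... entries) throws Exception {
        URL[] urls = new URL[entries.length];
        for (int i = 0; i < entries.length; i++) {
            urls[i] = entries[i].toURI().toURL();
        }
        return new URLClassLoader(urls, ClassLoader.getSystemClassLoader());
    }

    // Loads the real main class through the custom loader and invokes
    // its main(String[]) reflectively with the saved arguments.
    static void launch(String mainClass, String[] args, File... extraCp) throws Exception {
        URLClassLoader loader = buildLoader(extraCp);
        // Many frameworks look classes up via the context class loader,
        // so point it at the new loader as well.
        Thread.currentThread().setContextClassLoader(loader);
        Class<?> clazz = loader.loadClass(mainClass);
        Method main = clazz.getMethod("main", String[].class);
        main.invoke(null, (Object) args);
    }
}
```

Note that classes loaded this way are found by the new URLClassLoader rather than the system class loader, which is exactly the limitation the question runs into; this pattern works as long as nothing in the application insists on the system class loader specifically.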

Submit PySpark to Yarn cluster using Java

I need to create a Java program that submits Python scripts (that use PySpark) to a YARN cluster.
I saw that using SparkLauncher amounts to using a YarnClient, because it uses a built-in YARN client (writing my own YARN client is insane; I tried, there are too many things to handle).
So I wrote:
public static void main(String[] args) throws Exception {
    String SPARK_HOME = System.getProperty("SPARK_HOME");
    submit(SPARK_HOME, args);
}

static void submit(String SPARK_HOME, String[] args) throws Exception {
    String[] arguments = new String[]{
            // application name
            "--name",
            "SparkPi-Python",
            "--class",
            "org.apache.spark.deploy.PythonRunner",
            "--py-files",
            SPARK_HOME + "/python/lib/pyspark.zip," + SPARK_HOME + "/python/lib/py4j-0.9-src.zip",
            // Python program
            "--primary-py-file",
            "/home/lorenzo/script.py",
            // number of executors
            "--num-executors",
            "2",
            // driver memory
            "--driver-memory",
            "512m",
            // executor memory
            "--executor-memory",
            "512m",
            // executor cores
            "--executor-cores",
            "2",
            "--queue",
            "default",
            // argument 1 to my Spark program
            "--arg",
            null,
    };
    System.setProperty("SPARK_YARN_MODE", "true");
    System.out.println(SPARK_HOME);
    SparkLauncher sparkLauncher = new SparkLauncher();
    sparkLauncher.setSparkHome("/usr/hdp/current/spark2-client");
    sparkLauncher.setAppResource("/home/lorenzo/script.py");
    sparkLauncher.setMaster("yarn");
    sparkLauncher.setDeployMode("cluster");
    sparkLauncher.setVerbose(true);
    sparkLauncher.launch().waitFor();
}
When I run this jar from a machine in the cluster, nothing happens... no error, no log, no YARN container... just nothing. If I put a println inside this code, it does print.
What am I misconfiguring?
If I want to run this JAR from a different machine, where and how should I declare the IP?

Problem running Test Fragments in a JMeter JMX script using Java code

I have a JMeter script with many test elements: Test Fragments, Include Controllers, Beanshell samplers, SSH samplers, SFTP samplers, JDBC, etc. When I run the JMX script using the Java code below, some of the test elements are skipped. One major problem is that it skips Test Fragments without going inside the other JMX script. We run the Test Fragments through Include Controllers, for which we have tried every combination of paths. Please help me run the Test Fragments inside the JMX file using the Java code below.
I tried all the paths inside the JMX scripts, added all the JMeter jars to the Maven repository, etc.
public class Test_SM_RS_001_XML extends BaseClass {
    public void Test121() throws Exception {
        StandardJMeterEngine jmeter = new StandardJMeterEngine();
        Summariser summer = null;
        JMeterResultCollector results;
        File JmxFile1 = new File("/path/to/JMX/File/test121.jmx");
        HashTree testPlanTree = SaveService.loadTree(JmxFile1);
        testPlanTree.getTree(JmxFile1);
        jmeter.configure(testPlanTree);
        String summariserName = JMeterUtils.getPropDefault("summariser.name", "TestSummary");
        if (summariserName.length() > 0) {
            summer = new Summariser(summariserName);
        }
        results = new JMeterResultCollector(summer);
        testPlanTree.add(testPlanTree.getArray()[0], results);
        jmeter.runTest();
        while (jmeter.isActive()) {
            System.out.println("StandardJMeterEngine is Active...");
            Thread.sleep(3000);
        }
        if (results.isFailure()) {
            TestAutomationLogger.error("TEST FAILED");
            Assert.fail("Response Code: " + JMeterResultCollector.getResponseCode() + "\n"
                    + "Response Message: " + JMeterResultCollector.getResponseMessage() + "\n"
                    + "Response Data: " + JMeterResultCollector.getResponseData());
        }
    }
}
I expect the Test Fragments inside the JMX file to run, but they are not being considered and are skipped.
Your test code is missing an essential bit: resolving the Module and Include Controllers, which need to be traversed and added to the "main" HashTree.
So you need to replace this line:
testPlanTree.getTree(JmxFile1);
with these:
JMeterTreeModel treeModel = new JMeterTreeModel(new Object());
JMeterTreeNode root = (JMeterTreeNode) treeModel.getRoot();
treeModel.addSubTree(testPlanTree, root);
SearchByClass<ReplaceableController> replaceableControllers =
        new SearchByClass<>(ReplaceableController.class);
testPlanTree.traverse(replaceableControllers);
Collection<ReplaceableController> replaceableControllersRes = replaceableControllers.getSearchResults();
for (ReplaceableController replaceableController : replaceableControllersRes) {
    replaceableController.resolveReplacementSubTree(root);
}
HashTree clonedTree = JMeter.convertSubTree(testPlanTree, true);
and this one:
jmeter.configure(testPlanTree);
with this one:
jmeter.configure(clonedTree);
More information: Five Ways To Launch a JMeter Test without Using the JMeter GUI

Execute an AWS command in eclipse

I execute an EC2 command through Eclipse like this:
public static void main(String[] args) throws IOException {
    String spot = "aws ec2 describe-spot-price-history --instance-types"
            + " m3.medium --product-description \"Linux/UNIX (Amazon VPC)\"";
    System.out.println(spot);
    Runtime runtime = Runtime.getRuntime();
    final Process process = runtime.exec(spot);
    InputStreamReader isr = new InputStreamReader(process.getInputStream());
    BufferedReader buff = new BufferedReader(isr);
    String line;
    while ((line = buff.readLine()) != null)
        System.out.print(line);
}
The result in the Eclipse console is:
aws ec2 describe-spot-price-history --instance-types m3.medium --product-description "Linux/UNIX (Amazon VPC)"
{ "SpotPriceHistory": []}
However, when I execute the same command (aws ec2 describe-spot-price-history --instance-types m3.medium --product-description "Linux/UNIX (Amazon VPC)") in a shell, I obtain a different result:
"Timestamp": "2018-09-07T17:52:48.000Z",
"AvailabilityZone": "us-east-1f",
"InstanceType": "m3.medium",
"ProductDescription": "Linux/UNIX",
"SpotPrice": "0.046700"
},
{
"Timestamp": "2018-09-07T17:52:48.000Z",
"AvailabilityZone": "us-east-1a",
"InstanceType": "m3.medium",
"ProductDescription": "Linux/UNIX",
"SpotPrice": "0.047000"
}
My question is: how can I obtain in the Eclipse console the same result as in the shell console?
It looks like you are not getting the expected output because the console command you pass from your Java code is not parsed the way the shell parses it, and you are not using the AWS SDK for Java.
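To see why the parsing matters: Runtime.exec(String) splits the command string on whitespace, so the quoted product description "Linux/UNIX (Amazon VPC)" is broken into several tokens containing literal quote characters, and the CLI matches nothing. If you do want to keep shelling out to the CLI, passing each argument as its own list element through ProcessBuilder avoids any re-parsing. A minimal sketch (the class name is hypothetical, and the start() call is commented out since it requires the aws CLI on the PATH):

```java
import java.util.Arrays;
import java.util.List;

public class AwsCliArgs {

    // Each argument is a separate list element, so the product
    // description survives as a single argument with no quote mangling.
    static List<String> describeSpotPriceHistoryCommand() {
        return Arrays.asList(
                "aws", "ec2", "describe-spot-price-history",
                "--instance-types", "m3.medium",
                "--product-description", "Linux/UNIX (Amazon VPC)");
    }

    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(describeSpotPriceHistoryCommand());
        pb.inheritIO(); // forward the CLI's stdout/stderr to this console
        // pb.start().waitFor(); // uncomment where the aws CLI is installed
    }
}
```

That said, the SDK approach below is the more robust option, since it avoids spawning a subprocess entirely.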
To get the expected output in your Eclipse console, you could utilize the DescribeSpotPriceHistory Java SDK API call in your code[1]. An example code snippet for this API call according to the documentation is as follows:
AmazonEC2 client = AmazonEC2ClientBuilder.standard().build();
DescribeSpotPriceHistoryRequest request = new DescribeSpotPriceHistoryRequest()
        .withEndTime(new Date("2014-01-06T08:09:10"))
        .withInstanceTypes("m1.xlarge")
        .withProductDescriptions("Linux/UNIX (Amazon VPC)")
        .withStartTime(new Date("2014-01-06T07:08:09"));
DescribeSpotPriceHistoryResult response = client.describeSpotPriceHistory(request);
Also, you could look into this website containing Java file examples of various scenarios utilizing the DescribeSpotPriceHistory API call in Java[2].
For more details about DescribeSpotPriceHistory, kindly refer to the official documentation[3].
References
[1]. https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/ec2/AmazonEC2.html#describeSpotPriceHistory-com.amazonaws.services.ec2.model.DescribeSpotPriceHistoryRequest-
[2]. https://www.programcreek.com/java-api-examples/index.php?api=com.amazonaws.services.ec2.model.SpotPrice
[3]. https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSpotPriceHistory.html
