I'm running automation tests using the cucumber-junit project and I have roughly 200 scenarios in my project. The problem is that it's hard to find unused step definitions, as we constantly need to update the features. Is there any solution to detect step definitions that are no longer used? Any help much appreciated!
Since cucumber-jvm 4.4.0 it is possible to use the cucumber-jvm built-in plugin: unused.
For the Cucumber JUnit runner it could look like this:
@RunWith(Cucumber.class)
@CucumberOptions(
        plugin = {
                "json:build/report/cucumber.json",
                "unused:build/report/usage.txt"}, // mind this plugin
        glue = "stepdefs",
        features = "features"
)
public class CucumberRunner {
}
After the run, unused step definitions should be listed in build/report/usage.txt.
Original pull request: https://github.com/cucumber/cucumber-jvm/pull/1648
There can be a case where a single step is part of several different scenarios but is backed by only one method in the step definitions. It is easy to map a feature file's step to the corresponding method while executing the feature file using the 'cucumber feature' plug-in.
However, it is practically impossible to cross-validate in the other direction, i.e. to start from a step definition file and identify where a single step is used across a number of feature files.
Perhaps the only possible way out is to design your application in a modular way:
1) with feature files and corresponding step definition files specific to a particular feature/module;
2) keeping generic methods in a generic parent step definition file.
Designing the application in a modular way makes it much easier to identify the unused methods that can be removed from the step definitions.
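As an illustration, such a modular layout might look like this (all package and file names here are hypothetical):
src/test/resources/features/checkout/checkout.feature
src/test/java/stepdefs/common/CommonSteps.java      // generic steps shared by all features
src/test/java/stepdefs/checkout/CheckoutSteps.java  // steps used only by checkout.feature
With this structure, a step method that has no matching step in its own module's feature files is a strong candidate for removal.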
IntelliJ's cucumber plugin can search for usages of a step definition. It will not give you all unused ones in one go, but at least you can check individual usages one by one. The plugin is also available in the Community Edition of IDEA.
I inherited a legacy project where multiple files are developed in Java and many others in Kotlin. I have been able to configure Pitest to execute the mutation tests, and I get a correct report.
Now I would like to execute the mutation test only over the Kotlin files.
I tried to use <targetClasses>, but the parameter expression can only include certain packages; I did not discover a way to include only certain types of files.
I also tried to use <excludedClasses> to add a pattern that would exclude the Java files, but again it doesn't work.
Do you know a way to use targetClasses or excludedClasses to leave only the Kotlin files in the scope of the Pitest execution?
Thank you in advance.
There is no built-in way to limit mutation to only Kotlin files. You would need to implement a mutation interceptor.
https://pitest.org/quickstart/advanced/
Or use the exclusions functionality provided by the arcmutate extensions to ignore files with a .java extension.
https://docs.arcmutate.com/docs/exclusions.html
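A rough sketch of such an interceptor, assuming pitest's plugin API (interface and method names may vary between versions, so treat this as an outline rather than a drop-in implementation):

import java.util.Collection;
import java.util.stream.Collectors;
import org.pitest.bytecode.analysis.ClassTree;
import org.pitest.mutationtest.build.InterceptorType;
import org.pitest.mutationtest.build.MutationInterceptor;
import org.pitest.mutationtest.engine.Mutater;
import org.pitest.mutationtest.engine.MutationDetails;

// Hypothetical interceptor that discards mutations whose source file is a
// .java file, leaving only Kotlin-sourced mutations in scope.
public class KotlinOnlyInterceptor implements MutationInterceptor {

    @Override
    public InterceptorType type() {
        return InterceptorType.FILTER;
    }

    @Override
    public void begin(ClassTree clazz) {
        // no per-class state needed
    }

    @Override
    public Collection<MutationDetails> intercept(Collection<MutationDetails> mutations, Mutater m) {
        // MutationDetails carries the name of the source file the class was compiled from
        return mutations.stream()
                .filter(md -> !md.getFilename().endsWith(".java"))
                .collect(Collectors.toList());
    }

    @Override
    public void end() {
        // nothing to clean up
    }
}

An interceptor like this still has to be packaged as a pitest plugin and registered via a factory discovered on the classpath; the advanced usage page linked above describes that mechanism.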
I have a Cucumber scenario whose steps are defined in multiple step files, as opposed to having only one. If I decide to run the test using IntelliJ, I go to the run/debug configurations menu, and the form provides a field named Glue which enables me to specify the steps package.
So far I have been able to run the scenarios that have all steps defined in the same steps file, but I was unable to figure out how to do it for the scenarios that require multiple step files located in different packages. I've tried a comma-separated approach but without success. Does anyone know what I am missing? Thank you for your help.
There are a few ways to configure the glue paths with Cucumber.
As a cucumber.properties file in the root package (usually src/test/resources/cucumber.properties):
cucumber.glue=com.example.steps1,com.example.steps2
Via the command line
--glue com.example.steps1 --glue com.example.steps2
Or with the @CucumberOptions annotation.
@RunWith(Cucumber.class)
@CucumberOptions(glue = {"com.example.steps1", "com.example.steps2"})
public class RunCucumberTest {
}
When using IDEA you have to separate the glue packages with a new line or space (not a comma!).
com.example.steps1
com.example.steps2
And if you are on a recent version of Cucumber (6+) you don't have to provide the glue at all. Cucumber will search the class path by default.
M.P. Korstanje's answer provides very useful tips but not for this specific case. What worked for me was:
Specify the parent package as opposed to several distinct sub-packages (e.g. use only com.example instead of both com.example.steps1 and com.example.steps2) in the Glue field
Select the correct module in the Use classpath of module field.
Motivation:
In our code we have a few places where methods are run by their name. There are some big if-else-if blocks with each function name and a call to the corresponding method (I use the term function to describe just the names; for example, function X01 might correspond to the method SomeClass.functionX01). I've been looking into ways to improve that.
Goal:
Write just the methods, annotated with some custom annotation, removing the need to update or even include the if-else-if blocks in order to run a specific function. Have access to any generated code, if any code is generated.
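A minimal illustration of what I mean (all names here are made up, not from our actual code):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical marker annotation carrying the function name.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Function {
    String value();
}

class SomeClass {
    @Function("X01")
    public void functionX01() {
        System.out.println("running X01");
    }
}

class Dispatcher {
    // Replaces the if-else-if chain: find the method whose annotation matches the name.
    static void run(Object target, String name) throws Exception {
        for (Method m : target.getClass().getMethods()) {
            Function f = m.getAnnotation(Function.class);
            if (f != null && f.value().equals(name)) {
                m.invoke(target);
                return;
            }
        }
        throw new IllegalArgumentException("No function named " + name);
    }
}

The reflective lookup is what made this slower than the hand-written chain, which is why I moved on to source-level annotations.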
What I did:
I've created a first proof of concept using runtime annotations, and it proved successful, but slower than the if-else-if chain. The next attempt was with source annotations.
I've followed this link for an example; however, it did not seem to run in IntelliJ. What I wanted was to have, in this case, a PersonBuilder class generated; instead there was none. In some cases an error was raised: Error:java: Bad service configuration file, or exception thrown while constructing Processor object: javax.annotation.processing.Processor: Provider BuilderProcessor not found
After some Googling and failing to find anything, I turned to a book (Core Java, Volume II - Advanced Features, 9th Edition, Polish translation), which recommended running the following commands:
javac [AbstractProcessor implementation]
javac -processor [Compiled Processor] [other source files to compile]
This worked, however it is unsatisfactory, as it needs to happen automatically inside the IDE (NetBeans and IntelliJ, to be specific) during the build. The code does not need to be generated on the fly, but the programmer must have access to it after the build (as in: be able to call methods of the generated classes).
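For reference, the "Bad service configuration file ... Provider BuilderProcessor not found" error points at ServiceLoader registration: javac discovers annotation processors through a service file on the compile classpath, and that file must name a class that is already compiled. A minimal sketch of the registration (the com.example package is a placeholder; BuilderProcessor is the processor from the error above):

# file: META-INF/services/javax.annotation.processing.Processor
# one fully qualified processor class name per line
com.example.BuilderProcessor

Compiling the processor in a separate module (or jar) before the annotated sources is what normally lets the IDE build pick it up without the manual javac invocation.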
Question:
How can I have generated code produced and used in NetBeans and IntelliJ without the need for external tools? Is it possible, or are reflection, runtime annotations, or external tools the only way?
Additional info (just in case):
Language level: Java 1.8
JVM versions: 12 and 13
IDEs: NetBeans and IntelliJ
We have a Java program that relies on a specific library. We have created a second library with a very similar API to the first; however, this one is made in-house, and we are ready to begin testing it.
To test, we would like to replace the jar in the Java program with the jar of our new library. The issue is that the new library does not have the exact same namespace, so the import statements will not align. For example,
Java program
import someLibrary.x.y.Foo;

public class Main {
    public static void main(String[] args) {
        new Foo().bar();
    }
}
The new library has the same API but a different namespace:
import anotherLibrary.x.y.Foo;
Question: How can I use the classloader or another tool to run a Java program but replace a dependency and redirect import statements to another namespace?
[EDIT] - We do not have access to the Java program's source code. We can have this program changed to use our new library but we do not want to do that until after it has been thoroughly tested.
The only solution I can think of would involve writing a custom ClassLoader that would alter the bytecode to change the method references and field references to change the class name.
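A rough sketch of that approach using ASM's ClassRemapper, with the package names taken from the question (delegation is simplified here; a real loader must also control which classes get rewritten and be installed as the program's loader):

import java.io.IOException;
import java.io.InputStream;
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.commons.ClassRemapper;
import org.objectweb.asm.commons.Remapper;

// Rewrites references to someLibrary.* so they resolve against anotherLibrary.*.
public class RedirectingClassLoader extends ClassLoader {

    private final Remapper remapper = new Remapper() {
        @Override
        public String map(String internalName) {
            if (internalName.startsWith("someLibrary/")) {
                return "anotherLibrary/" + internalName.substring("someLibrary/".length());
            }
            return internalName;
        }
    };

    public RedirectingClassLoader(ClassLoader parent) {
        super(parent);
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        String resource = name.replace('.', '/') + ".class";
        try (InputStream in = getParent().getResourceAsStream(resource)) {
            if (in == null) {
                throw new ClassNotFoundException(name);
            }
            ClassReader reader = new ClassReader(in);
            ClassWriter writer = new ClassWriter(reader, 0);
            reader.accept(new ClassRemapper(writer, remapper), 0);
            byte[] bytes = writer.toByteArray();
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}

Note that a plain findClass override only runs when the parent loader cannot find the class, so in practice you would also need child-first delegation for the program's own classes (or instrument them via a java.lang.instrument agent instead).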
How about the straightforward solution:
Create a branch of your main program (in git or whatever source control tool you use):
Apply all the changes required to work with the new library (change all the imports)
Deploy on test environment and test extensively
Merge back to master when you feel confident enough
Another solution could be:
Create a branch out of new library
Change the imports so that it will look exactly as the old one (with all the packages)
Substitute the old library with a new one in your application
Deploy on test environment and test extensively
When you're ready with the new library deploy to production and keep working in production for a grace period of month or something (until you really feel confident)
In a month, change back all the imports (basically, move from the branch with the "old" imports to the branch with your real imports, in both the library and the application).
Update
It's also possible to relocate the packages of your version of the library automatically if you use Maven.
The Maven Shade plugin has a relocation feature that can be used to "relocate" the packages of your library to match the packages of the existing library. See the Shade plugin's documentation.
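A minimal sketch of that configuration, using the package names from the question:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <relocation>
                        <!-- rewrite the new library's packages to match the old API -->
                        <pattern>anotherLibrary</pattern>
                        <shadedPattern>someLibrary</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>

The shaded jar of the new library then exposes the old package names, so it can be dropped in without touching the application's imports.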
The Gradle User Guide often mentions that Gradle is declarative and uses build-by-convention. What does this mean?
From what I understand, it means that, for example, in the Java plugin there are conventions like:
sources must be in src/main/java, tests in src/test/java, resources in src/main/resources, ready jars in build/libs, and so on. However, Gradle does not oblige you to use these conventions, and you can change them if you want.
But I have a bigger problem understanding the first concept. In SQL, for example, you say what you want your queries to return, but not how the database system should execute them or which algorithm to use to extract the data.
Please tell me more so I can understand these concepts properly. Thanks.
Your understanding of build by convention is correct, so I don't have to add anything there. (Also see Jeff's answer.)
The idea behind declarative is that you don't have to work on the task level, implementing/declaring/configuring all tasks and their dependencies yourself, but can work on a higher, more declarative level. You just say "this is a Java project" (apply plugin: "java"), "here is my binary repository" (repositories { ... }), "here are my sources" (sourceSets { ... }), "these are my dependencies" (dependencies { ... }). Based on this declarative information, Gradle will then figure out which tasks are required, what their dependencies are, and how they need to be configured.
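To make that concrete, a minimal build script along those lines might look like this (the coordinates are placeholders):

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'junit:junit:4.13.2'
}

Nothing here says how to compile anything or in what order; from these declarations Gradle derives tasks such as compileJava, test, and jar, together with their dependencies.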
In order to understand a declarative style of programming it is useful to compare and contrast it against an imperative programming style.
Declarative Programming allows us to specify what we want to get done.
In Imperative Programming we specify how we get something done.
So when we use Gradle, as Peter describes, we make declarations, such as "This is a Java project" or "This is a Java web application".
Gradle then makes use of plugins that offer the service of handling the building of things like "Java projects" or "web applications". This is nice because it is the Gradle plugin that contains the implementation details concerned with tasks such as compiling Java classes and building war files.
Contrast this with another build system, Make, which is more imperative in nature. Let's take a look at a simple Make rule taken from here:
foo.o : foo.c defs.h
        cc -c -g foo.c
So here, we see a rule that describes how to build an object file foo.o from a C source file and a C header file.
The Make rule does two things.
The first line says that foo.o depends on foo.c and defs.h. This line is somewhat declarative, in so far as Make knows how to check the timestamp on foo.o to see whether it is older than foo.c and defs.h; if foo.o is older, Make will invoke the command that follows on the next line.
The next line is the imperative one.
The second line specifies exactly what command to run (cc, a C compiler) when foo.o is older than either foo.c or defs.h. Note also that the person writing the Makefile rule must know which flags are passed to the cc command.
Build by convention is the idea that if you follow the default conventions, then your builds will be much simpler. So while you can change the source directory, you don't need to explicitly specify the source directory. Gradle comes with logical defaults. This is also called convention over configuration.
This part edited to be more clear about declarative nature based on Peter's answer:
The idea of the build being declarative is that you don't need to specify every step that needs to be done. You don't say "do step 1, do step 2, etc". You define the plugins (or tasks) that need to be applied and gradle then builds a task execution graph and figures out what order to execute things in.
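As a tiny illustration (the task names here are hypothetical), you only declare the relationship between tasks, and Gradle works out the execution order:

task generateDocs {
    doLast {
        println 'generating docs'
    }
}

task packageDocs {
    dependsOn generateDocs
    doLast {
        println 'packaging docs'
    }
}

Running gradle packageDocs executes generateDocs first, without anyone spelling out that order imperatively.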