I'm trying to figure out how to get cucumber-jvm to produce a separate XML report for each feature file. At the moment, if I run it with mvn I get a single XML file, but I want a separate file for each feature.
For each feature file a descendant of org.junit.runners.ParentRunner is being created. You can browse this code here.
Is it possible somehow to tell surefire to create a separate xml for each runner?
More info here: #171
Thank you, guys!
Have you tried splitting it with XSLT or another tool? Sometimes it's better to create an additional Maven task that runs after the previous one. If the first XML contains all the data and can be split, I would try that.
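To illustrate the splitting idea, here is a minimal sketch in plain Java using only the JDK's DOM API. It assumes an aggregated report whose root contains one <testsuite> element per feature; the TEST-<name>.xml output naming mimics the Surefire convention but is an assumption:

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

class SplitReport {
    // Split an aggregated report into one TEST-<name>.xml file per <testsuite>
    static void split(File aggregated, File outDir) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        Document doc = dbf.newDocumentBuilder().parse(aggregated);
        NodeList suites = doc.getElementsByTagName("testsuite");
        Transformer t = TransformerFactory.newInstance().newTransformer();
        for (int i = 0; i < suites.getLength(); i++) {
            Element suite = (Element) suites.item(i);
            // Copy each <testsuite> into its own fresh document
            Document out = dbf.newDocumentBuilder().newDocument();
            out.appendChild(out.importNode(suite, true));
            // Derive a safe file name from the suite's name attribute
            String name = suite.getAttribute("name").replaceAll("\\W+", "_");
            t.transform(new DOMSource(out),
                    new StreamResult(new File(outDir, "TEST-" + name + ".xml")));
        }
    }
}
```

This could run as a small exec step after the test phase; the exact hook into the Maven build is up to you.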
Related
I have a Gradle Java project. We're not using Groovy at all currently, so there is no Groovy configuration. Now I need to provide support for Groovy scripts/classes/functions that can be called from Java code.
To give you context: we need to transform some fields before writing them to a file. We want users to be able to write those transformations themselves, without having to compile code, and we should be able to call those transformations before writing to the file. For this we're thinking of writing the transformations in Groovy...
So if someone can guide me step by step (what to add to or change in the Gradle file, whether to add a src/main/groovy package and classes, and so on), I'd really appreciate it.
I tried searching the internet but could not find any help.
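As a starting point, a minimal sketch of the Gradle changes this usually involves; the Groovy version here is an assumption, pick whichever you need:

```groovy
plugins {
    id 'java'
    id 'groovy'   // adds the src/main/groovy source set alongside src/main/java
}

dependencies {
    // version is an assumption; any recent Groovy release should work
    implementation 'org.codehaus.groovy:groovy:3.0.9'
}
```

With the Groovy runtime on the classpath, Java code can evaluate a user-supplied script at runtime, for example via GroovyShell (e.g. `new GroovyShell().evaluate(new File("transform.groovy"))`), which fits the "users write transformations without compiling" requirement.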
I have an issue with the NiFi InvokeHTTP processor which requires me to make modifications to it. I am not trying to replace it but to create a fork which I can use alongside the original.
The easiest way I have found to do this is to clone the code, checkout the 1.10 tag and run mvn clean install in the nifi/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors directory.
However, the result of this is a JAR file named "nifi-standard-processors-1.10.0.jar". This contains ALL of the standard processors. Instead of this, I am looking to output each processor individually so I can upload only the modified InvokeHTTP processor to NiFi.
The only thing I can think of is to delete the source for the other processors individually which seems a little long-winded. I have had a look in pom.xml and cannot see anything obvious which would allow me to do this either.
Does anyone know how I can achieve this? Apologies if this is an easy question; I haven't used Java in over a decade and this is my first time using Maven.
Thank you in advance.
Is the code change one you can make by extending the processor rather than changing its source? If so, I'd recommend creating a custom processor which extends InvokeHTTP in its own Maven bundle (e.g. nifi-harry-bundle) which depends on nifi-standard-processors. This will allow you to use the functionality already provided, modify what you need, compile and build only the new code, and copy that NAR (NiFi Archive) directly into the NiFi lib/ directory to consume it.
See building custom processors and this presentation for more details.
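For orientation, a sketch of the dependency section for the processor module of such a custom bundle; the module name is hypothetical and the version matches the 1.10.0 build mentioned above, but check the scopes against the NiFi archetype the developer guide generates:

```xml
<!-- pom.xml of a hypothetical nifi-harry-processors module -->
<dependencies>
    <dependency>
        <groupId>org.apache.nifi</groupId>
        <artifactId>nifi-standard-processors</artifactId>
        <version>1.10.0</version>
    </dependency>
</dependencies>
```

Your processor class would then be something like `public class MyInvokeHTTP extends InvokeHTTP { ... }`, overriding only what you need to change.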
I have a Java project which consists of loads of Maven modules and a considerable amount of unit/integration tests. The project is configured to create test reports via the Surefire plugin, which basically creates one XML report per test class; the tests are scheduled to run once a day on Jenkins.
What I want to do is send those XML reports to a test management system (XRAY) in order to make them more visible and manageable. My (naive) approach would be to add a post-build script on Jenkins and send those reports via curl to the test management system's REST API. This API offers a way to send a single report file at a time. The report file can be either single or nested, i.e. I can send both of the following and it works:
Single report
<testsuite>
...
</testsuite>
Aggregated report
<testsuites>
<testsuite ... />
<testsuite ... />
</testsuites>
The REST API can handle both, that is, the IBM JUnit schema and the standard Surefire schema.
Now to the problem: I obviously want to combine those reports into one to avoid having to make a billion requests to the REST API. However, I can't seem to find an automated way to do it. What I've tried so far:
play around with the Surefire plugin to merge the XML reports, but no appropriate option seems to exist
organise tests into a (JUnit) test suite, but the output remains one XML report per test class
find alternative plugins/tools that address this issue, with no luck
The only other way I can think of is to write a "merge script" myself, possibly using some sort of XSLT-transformation. But I'd rather not. Any help is much appreciated, thanks!
The solution is to use an external utility, as Surefire does not seem to support this.
I've successfully used the junit-merge utility, an NPM package, as shown for example in this tutorial.
The usage is pretty straightforward; you just need to specify the output file and the input folder containing the multiple JUnit XML based reports.
junit-merge -o results.xml -d target/surefire-reports/
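If pulling in an NPM tool isn't an option, the "merge script" idea from the question can also be sketched in plain Java with the JDK's DOM API; the TEST-*.xml file pattern follows the Surefire convention, and the output wraps every suite in a single <testsuites> root:

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

class MergeReports {
    // Merge every TEST-*.xml in reportDir into one aggregated <testsuites> file
    static void merge(File reportDir, File outFile) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        Document out = dbf.newDocumentBuilder().newDocument();
        Element root = out.createElement("testsuites");
        out.appendChild(root);
        File[] files = reportDir.listFiles(
                (d, n) -> n.startsWith("TEST-") && n.endsWith(".xml"));
        for (File f : files) {
            Document in = dbf.newDocumentBuilder().parse(f);
            // Import each report's <testsuite> root under the shared root
            root.appendChild(out.importNode(in.getDocumentElement(), true));
        }
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(out), new StreamResult(outFile));
    }
}
```

The resulting file matches the aggregated shape shown in the question and could then be sent in a single curl request.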
Just a question to anyone here who is using ExtentReport as a listener.
Is it possible to use ExtentReport to generate the HTML report while the tests are being executed, when it's used as a listener?
For example, instead of the report being generated when the tests are finished, the report is generated after the first test, and so on. I want to use ExtentReport to monitor the progress of my tests as well and show the results.
I've learnt it's possible when using it as a logger, as you can do a flush after each test. However, is it possible while using Extent as a listener?
Thanks in advance.
Kind regards,
Colin.
Yes, it is possible. Since ExtentReports is open source, one could edit the function that formulates the structure of the final report. But since ExtentReports ties many functionalities together, just modifying an endTest would cause a cascading effect on everything else.
Another way is to save the data before flush is called and call flush every time; one could then replace the existing file over and over.
My best bet is to leave it as it is and get other listener jars to do the work for you.
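The "call flush after every test" idea can be illustrated with plain Java. The class and method names below are hypothetical stand-ins (no ExtentReports dependency); the point is that the listener regenerates the whole report file after each test rather than only at the end, which is what calling extent.flush() per test does:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a reporter that, like ExtentReports,
// rewrites the whole report file each time flush() is called.
class ProgressReporter {
    private final List<String> results = new ArrayList<>();
    private final Path report;

    ProgressReporter(Path report) { this.report = report; }

    // Would be called from your listener's onTestSuccess/onTestFailure
    void onTestFinished(String name, boolean passed) throws IOException {
        results.add(name + ": " + (passed ? "PASS" : "FAIL"));
        flush(); // regenerate the report after every test, not just at the end
    }

    void flush() throws IOException {
        // Overwrite the existing file over and over, as suggested above
        Files.write(report, results);
    }
}
```

In a real TestNG/JUnit listener you would make the equivalent extent.flush() call from the per-test callbacks, accepting the cost of rewriting the HTML file each time.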
I'm writing a Java application that needs a lot of static data that is stored in many enum types. I would like a user-friendly way to customize this data using, for example, XML or JSON files, but since I can't do that directly with enums, I was looking for an elegant alternative.
Maybe a good solution would be a separate Java program that reads the XML files and produces the Java sources, which are then compiled with the rest of the sources. My doubt is how to automate this process in a standalone way (e.g. Ant?) and how to integrate it seamlessly with Eclipse so that it is done automatically while I'm working on the project. Does anything similar to what I'm looking for already exist? Any suggestion to solve my problem?
Thanks!
If the items and the overall structure are somehow fixed (and what varies most is the values of the attributes), you could consider defining the enum with one entry for each of your items and let the enum populate its own constants with data read from an external source (XML / JSON) -- at load time or on demand.
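A minimal sketch of that approach, using a properties file for brevity instead of XML/JSON; the file name items.properties and the attribute keys are assumptions:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// The constants are fixed in code; their attributes are read from an
// external file at class-load time, so users can edit the data
// without recompiling.
enum Item {
    SWORD, SHIELD, POTION;

    private static final Properties DATA = new Properties();
    static {
        try (FileInputStream in = new FileInputStream("items.properties")) {
            DATA.load(in);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Attribute looked up from the external file, e.g. key "SWORD.weight"
    public int weight() {
        return Integer.parseInt(DATA.getProperty(name() + ".weight", "0"));
    }
}
```

The same pattern works with an XML or JSON parser in the static block; only the set of constants stays hard-coded, which is exactly the "fixed items, varying attribute values" case described above.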
Create a project whose sole job is to generate java from your sources.
Make sure that the generation phase is done by Ant.
Now, wrap this project in Eclipse and use a custom Ant builder that calls the target in your existing build.xml.
This is a standard part of our dev infrastructure, so this definitely works.
You can write a maven plugin that generates the code. There are some plugins that do that. It won't work automatically, but you can connect it to the standard maven lifecycle so it gets executed just before compile.
I just did something like that recently.
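As a sketch of that lifecycle binding, the exec-maven-plugin can run a generator class in the generate-sources phase, so the sources exist before compile; the generator's main class here is hypothetical:

```xml
<!-- runs a (hypothetical) code generator before compile -->
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>3.1.0</version>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals><goal>java</goal></goals>
            <configuration>
                <mainClass>com.example.EnumGenerator</mainClass>
            </configuration>
        </execution>
    </executions>
</plugin>
```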
You can have Ant integrate seamlessly with Eclipse to achieve that:
In Eclipse open project properties, go to "Builders", click "New...", select "Ant Builder", select a build file, go to "Targets" tab and click "Set Targets..." for "Auto Build". Select the desired target and you are done. The target will run every time you save a source file (if "Build Automatically" is selected).
Have you considered including the XML files in your jar, and loading them on startup into maps that use the enum as a key?
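A short sketch of that variant, again with a properties file standing in for the XML; the enum is left untouched and the external data lives in an EnumMap keyed by it (file name and keys are assumptions):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.EnumMap;
import java.util.Properties;

class ItemData {
    enum Item { SWORD, SHIELD }

    // Load one attribute per enum constant from an external file
    // into a map keyed by the enum, populated once at startup.
    static EnumMap<Item, Integer> loadWeights(String path) throws IOException {
        Properties p = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            p.load(in);
        }
        EnumMap<Item, Integer> map = new EnumMap<>(Item.class);
        for (Item i : Item.values()) {
            map.put(i, Integer.parseInt(p.getProperty(i.name(), "0")));
        }
        return map;
    }
}
```

Compared with generating sources, this keeps the build untouched: the data file can ship inside the jar and be read via getResourceAsStream instead of a FileInputStream.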