As stated in the documentation, we can continue the build even if one of the tasks has failed. But I don't get the point of that feature... Why would we need to execute the other tasks if one of them has failed? Is it safe at all? Could you provide an example?
Yep, it makes sense, for example when generating classes from a WSDL, in case the service is not available.
Then you should provide some logic in your application to handle the case where this service is not available.
The second sentence in your linked doc says:
This allows the build to complete sooner, but hides other failures that would have occurred. In order to discover as many failures as possible in a single build execution, you can use the --continue option.
So instead of failing on the first error, it goes on and reports all of them. Imagine a web form that only tells you one error at a time after submitting, and it takes you ages to fill it out, versus a form that shows you all of your current errors at once.
Obvious examples include developing the Gradle build file itself and testing it with your build, or running on a continuous integration server, where you would rather have "all" the errors at once instead of hitting the build button all day.
If a task fails, any subsequent tasks that depend on it will not be executed, as it is not safe to do so.
So you will most likely not end up with the result you are expecting. But you could! That is for you to decide, as it depends on your build and what you are doing. So, is it safe? Heck no. But sometimes we all have to do unsafe things...
If you want to get rid of something that is failing but is not vital to the actual build result (e.g. the jar file you are after), yet is part of the build process (e.g. a CodeNarc task run as part of the tests), and you would rather fix a critical bug now with ugly code, you might be better off just excluding that task (gradle jar -x codenarc) instead of using this feature. Is this safe? Heck no... you get the picture!
I want to execute test cases in the order below.
Example: execution starts with the 3rd scenario of the 2nd feature, then the 4th scenario of the 6th feature, and so on.
Can we do this customization using TestNG, Cucumber options, Java, or any other tool?
Is it possible using hooks or the CLI option --order?
You could probably achieve this with the use of tags above the line that has the Scenario keyword. Then execute multiple test runs, using the tag options as parameters passed to the build, as mentioned here.
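As a hedged illustration, one way to wire that up is a JUnit runner per tagged batch. The tag name and feature path below are made up, and the cucumber.api packages belong to the older Cucumber-JVM 1.x line (newer io.cucumber releases use io.cucumber.junit and a single tag expression):

```java
// Sketch: one JUnit runner per "batch" of tagged scenarios.
// Tag name and feature path are illustrative; packages assume Cucumber-JVM 1.x.
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        tags = {"@batch1"}
)
public class Batch1Test {
    // Intentionally empty: the runner and its options drive the execution.
}
```

You could then run the build once per runner, or (on Cucumber-JVM 1.x) override the tags at build time via the cucumber.options system property, so the order between batches is simply the order in which you run them.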
However, this approach does seem a little odd. If the reason for doing this is to force an order because tests leave data cached in the running application and require state from one another, then this is not good practice. Each Scenario should be completely runnable in isolation, without relying on any prior one to set up state.
I successfully use Cucumber to process my Java-based tests.
Sometimes these tests encounter regression issues, and it takes time to fix them (depending on issue priority, it can be weeks or even months). So, I'm looking for a way to mark some Cucumber tests as known issues. I don't want these tests to fail the entire set of tests; I just want to mark them, for example, as pending with a yellow color in the report instead.
I know that I can specify a tag for failing tests and exclude them from the execution list, but that's not what I want to do, as I still need these tests to be run continuously. Once the issue is fixed, the appropriate test should be green again without any additional tag manipulation.
Some other frameworks provide such functionality (run the test but ignore its result if it fails). Is it possible to do the same trick somehow using Cucumber?
The solution I use now is to mark known issues with a specific tag, exclude these tests from the regular run, and run them separately. But I believe that's not the best solution.
Any ideas appreciated. Thanks in advance.
I would consider throwing a pending exception in the step that causes the known failure. This would allow the step to be executed and not be forgotten.
I would also consider rewriting the failing steps in such a way that when the failure occurs, it is caught and a pending exception is thrown instead of the actual failure. This would mean that when the issue is fixed and the reason for throwing the pending exception is gone, you have a passing suite.
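A rough sketch of that second suggestion (the class, step, and ticket names are made up; PendingException is imported from the older cucumber.api package here, newer versions ship an equivalent under io.cucumber.java):

```java
// Sketch only: package names assume an older Cucumber-JVM (1.x); newer
// versions use io.cucumber.java / io.cucumber.java.en instead.
import cucumber.api.PendingException;
import cucumber.api.java.en.Then;

public class InvoiceSteps {

    @Then("^the invoice total is (\\d+)$")
    public void theInvoiceTotalIs(int expectedTotal) {
        try {
            // the real assertion for the step
            verifyInvoiceTotal(expectedTotal);
        } catch (AssertionError knownIssue) {
            // Known regression (tracked as a hypothetical BUG-123): report the
            // step as pending instead of failing the whole suite. Remove this
            // guard once the bug is fixed so a real failure surfaces again.
            throw new PendingException("Known issue BUG-123: " + knownIssue.getMessage());
        }
    }

    private void verifyInvoiceTotal(int expectedTotal) {
        // ... actual check against the application under test ...
    }
}
```

Once the underlying issue is fixed, the assertion passes, the catch block is never reached, and the step is green again without touching any tags.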
Another thing I would work hard for is not allowing a problem to get old. Problems are like kids: when they grow up, they get harder and harder to fix. Fixing a problem while it is young, perhaps a few minutes old, is usually easy. Fixing problems that are months old is harder.
You shouldn't.
My opinion is that if you have tests that fail, you should add a bug/task ticket for these scenarios and add them to a build status page with the related tag.
Another thing you could do is to add the ticket number as a tag and remove it after the ticket is fixed.
If you have scenarios that fail due to a bug, then the report should show that; if the scenario is not fully implemented, then it is better not to run it at all.
One thing you could do is to add a specific tag/name to those scenarios, then in a before-scenario hook get the scenario's tags, check for that specific tag/name, and throw a pending exception.
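A rough sketch of that idea (the tag name is made up, the imports assume an older Cucumber-JVM, and whether a pending exception thrown from a hook is reported as pending or as a failure can depend on the Cucumber version and report plugin, so it is worth verifying in your setup):

```java
// Sketch only: a hypothetical @KnownIssue tag checked in a before-scenario hook.
// Package names assume an older Cucumber-JVM; adjust for io.cucumber versions.
import cucumber.api.PendingException;
import cucumber.api.Scenario;
import cucumber.api.java.Before;

public class KnownIssueHooks {

    @Before
    public void flagKnownIssues(Scenario scenario) {
        // Scenarios tagged @KnownIssue are reported as pending instead of red.
        if (scenario.getSourceTagNames().contains("@KnownIssue")) {
            throw new PendingException("Marked as a known issue - see the ticket tag on the scenario");
        }
    }
}
```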
What I suggest is to keep those scenarios running if there is a bug and to document that on the status page.
I think the client would understand better if those scenarios are red because they are failing, rather than some yellow color that leaves a "grey" status of what is actually happening.
If you need the status of the run to trigger some CI job, then maybe it is better to change the condition there.
As I see it, the thing you need to give some thought to is: what is the difference between yellow and red, pending and failed, for you or the client? You would want to keep a clear distinction and to keep track of the real status.
You should address these issues in an email, discuss them with the project team and the QA team, and after a final decision is taken, get feedback from the customer as well.
I have a Java project with Gradle as the build environment.
Simplifying my scenario:
I have a module called Utils, and one of its interfaces is TimeProvider, which is implemented either by a real time provider (System.currentTimeMillis()) or by a logical time provider (incrementing a counter). Another example: instead of sleeping for some time, I can just increment the timestamp. The goal is to run tests independently of the current time. An example use case is being able to stop at breakpoints without worrying about time progressing and disrupting my tests.
(BTW, if you have a better way to do this, I would appreciate any insight.)
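To make it concrete, the abstraction looks roughly like this (names simplified for illustration, not the actual project code):

```java
// Minimal sketch of the abstraction described above; names are illustrative.
public interface TimeProvider {
    long currentTimeMillis();
    void sleep(long millis) throws InterruptedException;
}

// Production implementation delegating to the real JVM clock.
class SystemTimeProvider implements TimeProvider {
    @Override public long currentTimeMillis() { return System.currentTimeMillis(); }
    @Override public void sleep(long millis) throws InterruptedException { Thread.sleep(millis); }
}

// Test implementation: "sleeping" just advances a counter, so tests are
// independent of wall-clock time and survive stopping at breakpoints.
class LogicalTimeProvider implements TimeProvider {
    private long now = 0;
    @Override public long currentTimeMillis() { return now; }
    @Override public void sleep(long millis) { now += millis; }
}
```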
My question is how to force all parts of the code to use this infrastructure and not the real Java time facilities (Thread.sleep, System.currentTimeMillis, etc.).
I want to prevent a scenario where a programmer accidentally writes code that uses the real Java time instead of my infrastructure.
One option is to use a SecurityManager, but this is tricky because the error would only be caught "some time" at run time.
I would prefer a way to catch this at compile time.
I wonder if there is a way, maybe in Gradle, to forbid some modules from using parts of the Java API? Or maybe there is another way to do so?
Thanks,
I work at a software company where our primary development language is Java. Naturally, we use Hudson for continuous builds, which it works brilliantly for. However, Hudson is not so good at some of the other things we ask it to do. We also use Hudson jobs to deploy binaries, refresh databases, run load testing, run regressions, etc. We really run into trouble when there are build dependencies (e.g. load testing requires a DB refresh).
Here's the one thing that Hudson doesn't do that we really need:
Build dependency: It supports build dependencies for Ant builds, but not for Hudson jobs. We're using the URL invocation feature to cause a Hudson job to invoke another Hudson job. The problem is that Hudson always returns a 200 and does not block until the job is done. This means that the calling job doesn't know a) if the build failed and b) if it didn't fail, how long it took.
It would be nice to not have to use shell scripting to specify the behavior of a build, but that's not totally necessary.
Any direction would be nice. Perhaps we're not using Hudson the right way (i.e. should all builds be Ant builds?) or perhaps we need another product for our one-click deployment, load testing, migration, DB refresh, etc.
Edit:
To clarify, we have parameters in our builds that can cause different dependencies depending on their values. For example, sometimes we want load testing with a DB refresh, sometimes without. Unfortunately, creating a Hudson job for each combination of parameters (as the Join plugin requires) won't work, because the different combinations could lead to dozens of jobs.
I don't think I understand your "build dependency" requirements. Any Hudson job can be configured to trigger another (downstream) job, or be triggered by another (upstream) job.
The Downstream-Ext plugin and Join plugin allow for more complex definition of build dependencies.
There is a CLI for Hudson which allows you to issue commands to a Hudson instance. Use "help" to get precise details. I believe there is a command which allows you to invoke a build and wait for it to finish.
http://wiki.hudson-ci.org/display/HUDSON/Hudson+CLI
Do you need an extra job for your 'dependencies'?
Your dependencies sound to me like an extra build step. The script that refreshes the DB can be stored in your SCM, and every build that needs this step checks it out. You can invoke that script if your "db refresh" parameter is true. This can be done with more than just one of your modules.

What is the advantage? Your script logic is in your SCM (it's always good to have a history of the changes). You still have the ability to update the script once for all your test jobs (since they all check out the same script). In addition, you don't need to look at several scripts to find out whether your test ran successfully or not. Especially if you have one job that is part of several execution lines, it becomes difficult to find out which job triggered which run. Another advantage is that you have fewer jobs on your Hudson, and therefore it is easier to maintain.
I think what you are looking for is http://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin. This plugin lets you execute other jobs based on the status of previous jobs. You can even call a shell script from the downstream project to determine any additional conditions, which can in turn call the API for more info.
For example, we have a post-build step to notify us; it calls back to the JSON API to build a nice topic in our IRC channel that says "All builds ok" or "X, Y failed", etc.
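To give a hedged idea of what such an API call can look like from Java (the host and job name below are made up; the /job/&lt;name&gt;/lastBuild/api/json path and the result field follow the standard Hudson/Jenkins remote JSON API):

```java
// Hedged sketch: query the Hudson/Jenkins remote JSON API for the result of a
// job's last build. Host and job name are made up for illustration.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class LastBuildStatus {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://hudson.example.com/job/load-testing/lastBuild/api/json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        StringBuilder body = new StringBuilder();
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }

        // A real implementation would use a proper JSON parser (and handle
        // authentication); this crude check only shows where the result lives.
        boolean success = body.toString().contains("\"result\":\"SUCCESS\"");
        System.out.println(success ? "All builds ok" : "Last build did not succeed");
    }
}
```

The result field is where the calling side can finally see whether the downstream job failed, which is exactly what the plain URL-invocation trigger does not tell you.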
I am continuing the development of a serialization layer generator. The user enters a description of types (currently in XSD or WSDL), and the software produces code in a certain target language (currently Java and ANSI C89) which is able to represent the types described and which is also able to serialize (turn into a byte sequence) and deserialize these values.
Generating code is tricky (I mean, writing code is hard; writing code that writes code is writing code to do a hard thing, which is a whole new land of hardness :) ). Thus, in the project which preceded my master's thesis, we decided that we wanted some system tests in place.
These system tests know a type and a number of pairs of values and byte sequences. In order to execute a system test in a certain language, the type is run through the system, resulting in code as described above. This code is then linked with some handwritten host code, which is capable of reading these pairs of a byte sequence and a value, and which provides functions to read values of the given type from a string. The resulting executable is then run, the byte-value pairs are fed into it, and it is checked whether all such bindings result in the output "Y". If this is the case, then these example values for the type serialize into the previously defined byte sequence, and we can conclude that the generated code compiles and runs correctly, and thus, overall, that the part of the system handling this type is correct. This is a very good thing.
However, right now I am a bit unhappy with the current implementation. Currently, I have written a custom JUnit runner which uses quite a lot of reflection sorcery in order to read these byte-value bindings from a class's attributes. Also, the overall stack to generate the code requires a lot of boilerplate code and boilerplate classes which do little more than contain two or three strings. Even worse, it is quite hard to get good integration with all the tools which build on JUnit's descriptions and generate test failure reports. It is quite hard to actually debug what is happening when the helpful Maven JUnit test runner or the Eclipse test runner gobbles up whatever error the compiler threw, just because the format of this error is different from JUnit's own assertion errors.
Even worse, a single failed test in the generated code causes the Maven build to fail. This is very annoying. I like it when the Maven build fails because a certain test of a different unit fails, because (for example) if a certain depth-first preorder calculation fails for some reason, everything will go haywire. However, if I just want to show someone some generated code for a type I know to be working, then it is very annoying that I cannot quickly build my application because the type I am working on right now is not finished.
So, given this background, how can I get a nice automated system which checks these generation specifications? Possibilities I have considered:
A JUnit-integrated solution appears to be less than ideal, unless I can improve the integration of Maven with JUnit, and of JUnit with my runner and everything else.
We used FitNesse earlier, but ultimately ditched it because it caused more problems than it solved. The major issues we had were integration with Maven and Hudson.
A solution using TextTest. I am not entirely convinced, because it mostly wants an executable, strings to put on stdin, and strings to expect on stdout. Adding the whole "run the application, link with the host code and THEN run the generated executable" step seems kind of complicated.
Writing my own solution. This will of course work and do what I want. However, this will be the most time-consuming option, as usual.
So... do you see another possible way to do this while avoiding writing something of my own?
You can run Maven with -Dmaven.test.skip=true. NetBeans has a way to set this automatically unless you explicitly invoke one of the commands to test the project; I don't know about Eclipse.