I'm going to execute some Selenium tests after each Bamboo build. As far as I can see, the best way is to store them in a separate repo and use a specific project (or a stage in an existing one) to run these tests. But there is an issue I can't figure out. I'm using deployment plans to deliver the product to the development environment after the build, so I'd like my tests to be executed only if the deployment was successful. Does anybody know how to properly express this in Bamboo triggers' terms? Thank you.
It's rather a confusing and complicated process. As we all know, Selenium needs a live website to point to in order to execute the tests. There are several ways to accomplish this using Bamboo. I assume you already have the build pipeline set up for automatic deployment.

Depending on what you want and how you deploy, several agents can be used to execute the tests. Another way is to use Selenium Grid. You want to trigger the Selenium task after the deployment happens, using several slaves. A grid creates the hub-and-slaves relationship and tells the hub to execute the tests accordingly. Here is some info about the plug-in that can be used to trigger Selenium TestNG tests.

And, of course, as you said, you want to make the Selenium task dependent on the deployment, so that if the deployment fails, the tests will not run. Hope this helps!
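For reference, the test side just needs to point a RemoteWebDriver at the grid hub and at the freshly deployed site. A minimal sketch, assuming TestNG and Selenium 3.x or newer; the hub and site URLs are placeholders you would normally pass in from the Bamboo plan:

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

public class SmokeTest {

    @Test
    public void deployedSiteIsUp() throws Exception {
        // Hypothetical hub address; in practice, inject it as a plan variable.
        URL hubUrl = new URL("http://selenium-hub.example.com:4444/wd/hub");

        // The hub forwards the session to one of its registered nodes (slaves).
        WebDriver driver = new RemoteWebDriver(hubUrl, new ChromeOptions());
        try {
            // The environment the deployment plan has just delivered to.
            driver.get("http://dev.example.com/");
            Assert.assertTrue(driver.getTitle().length() > 0,
                    "Deployed site returned an empty title");
        } finally {
            driver.quit();
        }
    }
}
```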
I created a project using Cucumber to perform e2e tests of the various APIs I consume. I would like to know if I can run these tests through endpoints, to further automate the application that was created.
That way I would be able to deploy this app and would not need to keep calling the tests locally.
You can do that if you create a REST API with a GET method which executes the test runner when called.
How to run cucumber feature file from java code not from JUnit Runner
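If you do go that route, here is a minimal sketch of the idea, assuming Cucumber-JVM 6+ (which exposes the programmatic io.cucumber.core.cli.Main entry point) and the JDK's built-in com.sun.net.httpserver for a hypothetical endpoint; the glue package and feature path are placeholders for your project layout:

```java
import com.sun.net.httpserver.HttpServer;
import io.cucumber.core.cli.Main;
import java.net.InetSocketAddress;

public class TestRunnerEndpoint {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // GET /run-tests executes the Cucumber runner and reports the exit status.
        server.createContext("/run-tests", exchange -> {
            // Glue package and feature directory are placeholders.
            byte exitStatus = Main.run(
                    new String[] {"--glue", "com.example.steps", "src/test/resources/features"},
                    Thread.currentThread().getContextClassLoader());

            String body = exitStatus == 0 ? "PASSED" : "FAILED";
            exchange.sendResponseHeaders(exitStatus == 0 ? 200 : 500, body.length());
            exchange.getResponseBody().write(body.getBytes());
            exchange.close();
        });

        server.start();
    }
}
```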
But I don't recommend doing that, since what you are trying to achieve looks to me like a pipeline definition.
If you're in touch with the developers of these APIs, you can speak with them about including your test cases in their pipeline, since they probably have one in place.
If, for some reason, you still want to trigger your tests remotely and set it up on your own, I would recommend you start reading about Jenkins. You can host it on any machine and run your tests from there, accessing your Jenkins instance from any machine:
https://www.softwaretestinghelp.com/cucumber-jenkins-tutorial/
If your code is hosted on a platform like GitHub or GitLab, they already have their own way of creating pipelines, and you can use it to run your tests. Read about GitLab pipelines or GitHub Actions.
So this is my situation:
I am fairly new to GitLab CI. I don't host my own GitLab instance but rather push everything to GitLab itself. I am not using, and am not familiar with, any build tools like Maven. I usually work on and run my programs from an IDE rather than the terminal.
This is my problem:
When I push my Java project I want my pipeline to start the JUnit tests I wrote. Whereas I've found various simple commands to run unit tests for languages other than Java, I didn't come across anything for JUnit. I've just found people using Maven, running the tests locally and then pushing the test reports to GitLab. Is it even possible to easily run JUnit tests on the GitLab server with the pipeline, without build tools like Maven? Do I have to run them locally? Do I have to learn to start them with a Java terminal command? I've been searching for days now.
The documentation is clear:
To enable the Unit test reports in merge requests, you need to add artifacts:reports:junit in .gitlab-ci.yml, and specify the path(s) of the generated test reports.
The reports must be .xml files, otherwise GitLab returns an Error 500.
You then have various examples in Ruby, Go, Java (Gradle or Maven), and other languages.
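For a plain Maven project, the whole pipeline can be as small as this sketch of a .gitlab-ci.yml (the image tag and report path are assumptions, adjust them to your layout):

```yaml
test:
  image: maven:3.8-openjdk-11   # the shared runner pulls this image, so nothing runs locally
  script:
    - mvn test                  # compiles the project and runs the JUnit tests
  artifacts:
    when: always                # publish the report even when tests fail
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml
```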
But with GitLab 13.12 (May 2021), this gets better:
Failed test screenshots in test report
GitLab makes it easy for teams to set up end-to-end testing with automation tools like Selenium that capture screenshots of failed tests as artifacts.
This is great until you have to sort through a huge archive of screenshots looking for the specific one you need to debug a failing test.
Eventually, you may give up due to frustration and just re-run the test locally to try and figure out the source of the issue instead of wasting more time.
Now, you can link directly to the captured screenshot from the details screen in the Unit Test report on the pipeline page.
This lets you quickly review the captured screenshot alongside the stack trace to identify what failed as fast as possible.
See Documentation and Issue.
I am very new to this arena.
I am trying to use JaCoCo to get test coverage for my automation test cases, which are written in a completely separate repo from the application.
I want to know if it is possible in the first place, and if it is, how to achieve it.
There is a separate repo used by the developers for the application source code.
How is it possible to get the test coverage when the source code and the tests are in different repos?
The unit test coverage is already obtained by the developers.
How can testers get the coverage for their integration tests?
Are you using a CI/CD tool like Jenkins? In that case, you can schedule different builds for different branches, if you have admin access to the tool.
Edited after seeing John's request.
Usually, companies will have a DevOps admin and other project stakeholders who monitor what is happening in each branch. There will be a branching strategy for each product team. You need to periodically merge contents from the developer branch to the test branch so that the JaCoCo test coverage reports don't look confusing to your dev team members. When, how, and what is merged will be decided by the stakeholders, and it depends on a lot of factors, starting right from the software development process.
If you are following the Scrum methodology, at the end of each sprint developers give a demo of testable new features or enhancements. The testing team then creates test cases based on what is delivered. All this happens in the sprint review/retrospective/demo meetings.
If you need more information on Jenkins and configuring multiple jobs on it, have a look at the separate Stack Exchange site dedicated to DevOps. I believe this should be a good place for you to start.
I have around 200 TestNG test cases that can be executed through Maven via a suite.xml file. But I want to expose these test cases as a web service (or something similar), so that anybody can call any test case from their machine and know whether that particular functionality is working fine at that moment.
But what if no one calls the test web services for a long time? You won't know the state of your application, or whether you have any failures/regressions.
Instead, you can use:

- continuous integration to run the tests automatically on every code push; see Jenkins for a more complete solution, or, more hacky, you can create your own cron job/daemon/git hook on a server to run your tests automatically;
- a Maven plugin that displays the results of the last execution of the automated tests; see Surefire for an HTML report on the state of the last execution of each test.
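That said, if you still want an on-demand trigger, TestNG can be invoked programmatically, so any thin web endpoint could call it. A rough sketch, assuming the suite.xml from the question (the path is a placeholder):

```java
import java.util.Collections;
import org.testng.TestNG;

public class SuiteRunner {

    /**
     * Runs the existing suite.xml programmatically and reports whether anything failed.
     * Any web endpoint (servlet, Spring controller, etc.) could call this on demand.
     */
    public static boolean runSuite() {
        TestNG testng = new TestNG();
        // Path to the suite file is a placeholder for your project layout.
        testng.setTestSuites(Collections.singletonList("src/test/resources/suite.xml"));
        testng.run();
        return !testng.hasFailure();
    }

    public static void main(String[] args) {
        System.out.println(runSuite() ? "PASSED" : "FAILED");
    }
}
```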
I work at a software company where our primary development language is Java. Naturally, we use Hudson for continuous builds, which it works brilliantly for. However, Hudson is not so good at some of the other things we ask it to do. We also use Hudson jobs to deploy binaries, refresh databases, run load tests, run regressions, etc. We really run into trouble when there are build dependencies (e.g. load testing requires a DB refresh).
Here's the one thing that Hudson doesn't do that we really need:
Build dependency: It supports build dependencies for Ant builds, but not for Hudson jobs. We're using the URL invocation feature to cause a Hudson job to invoke another Hudson job. The problem is that Hudson always returns a 200 and does not block until the job is done. This means that the calling job doesn't know a) if the build failed and b) if it didn't fail, how long it took.
It would be nice to not have to use shell scripting to specify the behavior of a build, but that's not totally necessary.
Any direction would be nice. Perhaps we're not using Hudson the right way (i.e. should all builds be Ant builds?) or perhaps we need another product for our one-click deployment, load testing, migration, DB refresh, etc.
Edit:
To clarify, we have parameters in our builds that can cause different dependencies depending on the parameters. E.g. sometimes we want load testing with a DB refresh, sometimes without one. Unfortunately, creating a Hudson job for each combination of parameters (as the Join plugin requires) won't work, because the different combinations could lead to dozens of jobs.
I don't think I understand your "build dependency" requirements. Any Hudson job can be configured to trigger another (downstream) job, or be triggered by another (upstream) job.
The Downstream-Ext plugin and Join plugin allow for more complex definition of build dependencies.
There is a CLI for Hudson which allows you to issue commands to a Hudson instance. Use "help" to get precise details. I believe there is a command which allows you to invoke a build and wait until it finishes.
http://wiki.hudson-ci.org/display/HUDSON/Hudson+CLI
Do you need an extra job for your 'dependencies'?
Your dependencies sound to me like an extra build step. The script that refreshes the DB can be stored in your SCM, and every build that needs this step checks it out. You can invoke that script if your "db refresh" parameter is true. This can be done with more than just one of your modules.

What is the advantage? Your script logic is in your SCM (it's always good to have a history of the changes). You still have the ability to update the script once for all your test jobs (since they all check out the same script). In addition, you don't need to look at several scripts to find out whether your test ran successfully or not. Especially if you have one job that is part of several execution lines, it becomes difficult to find out which job triggered which run. Another advantage is that you have fewer jobs on your Hudson, and therefore it is easier to maintain.
I think what you are looking for is the Parameterized Trigger Plugin: http://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin
This plugin lets you execute other jobs based on the status of previous jobs. You can even call a shell script from the downstream project to determine any additional conditions, which can in turn call the API for more info.
For example, we have a post-build step to notify us; it calls back the JSON API to build a nice topic in our IRC channel that says "All builds OK" or "X, Y failed", etc.