I am very new to this arena.
I am trying to use JaCoCo to get test coverage for my automated test cases, which are written in a completely different repo from the code under test.
I want to know if it is possible in the first place, and if it is, how to achieve it?
There is separate repo used by the developers for the application source code.
How is it possible to get the test coverage when the source code and the tests are in different repos?
The developers already get coverage for their unit tests.
How can testers get the coverage for their integration tests?
Are you using a CI/CD tool like Jenkins? In that case, you can schedule different builds for different branches in it, if you have admin access to the tool.
Edited after seeing John's request.
Usually, companies will have a DevOps admin and other project stakeholders who monitor what is happening in each branch. There will be a branching strategy for each product team. You need to periodically merge contents from the developer branch to the test branch so that the JaCoCo test coverage reports don't look confusing to your Dev team members. When, how and what is merged will be decided by the stakeholders, and it depends on a lot of factors, starting right from the software development process.
If you are following the Scrum methodology for software development, at the end of each sprint developers give a demo of testable new features or enhancements. The testing team will create test cases based on what is delivered. All of this happens in the sprint review/retrospective/demo meetings.
If you need more information on Jenkins and configuring multiple jobs on it, you should look at the separate Stack Exchange site dedicated to DevOps. I believe this should be a good place to start for you.
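As for the mechanics of collecting coverage for tests that live outside the application's repo: a common pattern (not described above, so treat all file names and paths as illustrative) is to start the application with the JaCoCo agent attached, run the external integration tests against it, and then build a report with the JaCoCo CLI:

# Start the application with the JaCoCo agent attached (paths illustrative).
java -javaagent:jacocoagent.jar=destfile=it-coverage.exec,output=file -jar application.jar

# After the external tests have run, build a report from the dump file,
# pointing the CLI at the application's compiled classes and sources.
java -jar jacococli.jar report it-coverage.exec \
     --classfiles target/classes \
     --sourcefiles src/main/java \
     --html coverage-report

Note that with output=file the dump is written when the JVM shuts down, so stop the application (or use the agent's dump options) before generating the report.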
So this is my situation:
I am fairly new to gitlab-ci. I don't host my own GitLab instance but rather push everything to GitLab itself. I am not using, and am not familiar with, any build tools like Maven. I usually work on and run my programs from an IDE rather than the terminal.
This is my problem:
When I push my Java project, I want my pipeline to start the JUnit tests I wrote. While I've found various simple commands for running unit tests in languages other than Java, I didn't come across anything for JUnit. I've just found people using Maven, running the tests locally and then pushing the test reports to GitLab. Is it even possible to easily run JUnit tests on the GitLab server with the pipeline, without build tools like Maven? Do I have to run them locally? Do I have to learn to start them with a Java terminal command? I've been searching for days now.
The documentation is clear:
To enable the Unit test reports in merge requests, you need to add artifacts:reports:junit in .gitlab-ci.yml, and specify the path(s) of the generated test reports.
The reports must be .xml files, otherwise GitLab returns an Error 500.
You then have various examples in Ruby, Go, Java (Gradle or Maven), and other languages.
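As a minimal sketch, assuming a Maven project (the image tag and the Surefire report path below are the usual Maven defaults, but treat them as illustrative), a .gitlab-ci.yml job could look like:

# Job that compiles and runs the JUnit tests, then publishes the reports.
test:
  image: maven:3.8-openjdk-11
  script:
    - mvn test
  artifacts:
    when: always        # publish reports even when tests fail
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml

So the tests do have to be compiled and run by something on the runner; a build tool such as Maven or Gradle is by far the easiest way to do that, even if you normally work from the IDE.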
But with GitLab 13.12 (May 2021), this gets better:
Failed test screenshots in test report
GitLab makes it easy for teams to set up end-to-end testing with automation tools like Selenium that capture screenshots of failed tests as artifacts.
This is great until you have to sort through a huge archive of screenshots looking for the specific one you need to debug a failing test.
Eventually, you may give up due to frustration and just re-run the test locally to try and figure out the source of the issue instead of wasting more time.
Now, you can link directly to the captured screenshot from the details screen in the Unit Test report on the pipeline page.
This lets you quickly review the captured screenshot alongside the stack trace to identify what failed as fast as possible.
See Documentation and Issue.
My question is also related to who does what in typical BDD. My understanding is that the Product Owner comes up with the User Story (which may or may not be in Gherkin), QA writes scenarios for end-to-end testing (in feature files), and the Dev writes the code (how and where, and does the Dev follow BDD as well?). At this point, if the Dev writes automated unit tests, can these be leveraged by QA for end-to-end testing, or can they be absolutely different?
My question is how the Dev and QA leverage each other's work in terms of coding while following BDD. I am not sure how to connect the dots.
Let's take the example of a Java-based application where QA is already using Cucumber with Selenium WebDriver for automated testing.
If you are practicing BDD, then you would create the specs first (define the behaviour) and only then implement this behaviour (i.e. write the production code). On which level you define the behaviour is less relevant, although at the unit test level most people would call this "TDD" (even though it's not necessarily test driven as much as the "test" is the design for the code you want to write). The developer and QA would collaborate on defining the behaviour and implementing the tests and production code. Ideally, I'd expect different tests at different levels, the final (highest) level being E2E tests. I would also make sure not to retest everything on every level, but to only test the things that make sense at that level. For instance: a method that calculates a value should be unit tested, how that value is displayed in the front end would be tested in the front end (can still be a unit test), how to get the value from the backend would be an integration test, etc.
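To make that collaboration concrete, here is a minimal sketch (all names are invented for illustration, not taken from the question) of how a QA-owned scenario and the dev-owned production code can meet in one step-definition class:

// The scenario below would live in a QA-owned .feature file:
//
//   Scenario: Discount is applied at checkout
//     Given a cart with items worth 100.0
//     When a 10 percent discount code is applied
//     Then the total is 90.0
//
// The step definitions glue that scenario to the same production class
// (Cart) that the developers drive out and cover with their unit tests.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertEquals;

public class CheckoutSteps {

    // Hypothetical production class; in a real project this lives in the
    // application code base and has its own dev-written unit tests.
    public static class Cart {
        private double total;
        public void addItem(double price) { total += price; }
        public void applyDiscount(int percent) { total *= (100 - percent) / 100.0; }
        public double total() { return total; }
    }

    private Cart cart;

    @Given("a cart with items worth {double}")
    public void aCartWithItemsWorth(double amount) {
        cart = new Cart();
        cart.addItem(amount);
    }

    @When("a {int} percent discount code is applied")
    public void aDiscountCodeIsApplied(int percent) {
        cart.applyDiscount(percent);
    }

    @Then("the total is {double}")
    public void theTotalIs(double expected) {
        assertEquals(expected, cart.total(), 0.001);
    }
}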
You might be interested in reading more about BDD either here: https://docs.cucumber.io/bdd/, in any of the related blogposts here: https://docs.cucumber.io/community/blog-posts/ or in The Cucumber Book / The Cucumber for Java Book.
I am working on a project that applies BDD. When the BA creates a ticket and writes down all the scenarios, it is assigned to a developer. Meanwhile, QA also creates a QA ticket for the work related to that DEV ticket.
But QA will start writing automation tests only when the code for the DEV ticket is in review or already done, because the feature needs to be available for testing.
By the time QA starts coding, all unit tests for that ticket should be done.
So to leverage the work of DEV and QA, we proposed a solution. It is still a pilot and not officially applied.
QA needs to be involved in unit test review. What this means is that they need to look at all the unit tests and comment if they think some cases need to be added or removed. QA can also get the unit test coverage and decide what automation tests to write based on that coverage.
Here QA needs to be actively involved and to decide what to test end to end.
It would be easier if you could discuss the unit test coverage face to face with the developer, but I think reviewing the code is more objective. Also, face to face, not every developer is willing to tell a QA about their work.
However, this solution requires more skills from a QA engineer. Not every QA can read and understand the DEV code.
This is the idea our QA team came up with on the current project; I don't know whether any other project applies it.
This is a really good question. I also want to hear more opinions/ideas from other people who wish to leverage the work of QA and DEV.
How about BDD/TDD pair programming between Dev and QA individuals, resulting in e2e test automation?
This could entail:
- e2e services and app deployment automation (one-off; ideally able to run on any engineer's laptop)
- use case setup:
  - set up the application state for the behaviour (update data/DB schema in the datastore/database, and config files if the feature uses a key switch)
  - decide on a clear definition of the behaviour's input and output
  - define the feature trigger and the validation rule
- implement the application logic
- verify the validation rule against the implemented logic
It may sound like a lot of things to do at once, and thus a lot of folks would be discouraged from implementing e2e, but I guess that with the right tool set the process can be quite easy to implement.
I have a Cucumber test runner class in which I define the test suite to run, like below:
@CucumberOptions(
    features = {"Feature_Files/featues"},
    glue = {"com.automation.stepdef"},
    monochrome = true,
    dryRun = false,
    plugin = {"html:target/cucumber-html-report"},
    tags = {"@Startup"}
)
If I wish to customize this tags option on successful completion of the @Startup feature, is it possible?
The most common way of running two or more dependent test suites is to create triggers for two or more jobs in your CI. This can be done with various plugins, as described here.
Otherwise, if these are test-preparation actions, you can use JUnit's @Before or the related @BeforeClass annotation.
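For instance, a minimal JUnit 4 sketch (the health-check method is hypothetical) that verifies a startup condition once before a class of dependent tests:

import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assume.assumeTrue;

public class FunctionalTests {

    @BeforeClass
    public static void applicationMustBeUp() {
        // A failed assumption here skips every test in the class,
        // rather than failing them all individually.
        assumeTrue("Application did not start", applicationIsUp());
    }

    private static boolean applicationIsUp() {
        // Hypothetical health check, e.g. an HTTP ping of the app under test.
        return true;
    }

    @Test
    public void someDependentScenario() {
        // ...
    }
}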
This seems not to be possible with current Cucumber. What you are asking for is dependency among test scenarios, which IMO is a very good feature. For example, we have a login feature and some other functional features. It would not make any sense, and would actually be a waste of time, to run the other features if the login feature does not work in the first place. To make things worse, you will see a lot of failures in the test report, in which you cannot easily spot the root cause: the non-working login feature.
TestNG supports the "dependsOnMethods" feature. However, TestNG is not a BDD tool.
QAF https://qmetry.github.io/qaf/qaf-2.1.7b/scenario.html#meta-data supports this as a BDD tool. However, it would be too heavy to introduce a new tool for such a simple feature.
All we need is some addition to the Cucumber syntax and a customized test runner to build up the scenario execution order according to the dependencies, and to skip features if the feature they depend on fails.
I would love to see if someone can put some effort into this :)
BTW, CI could work around this issue, but again, it's too heavy and clumsy. Imagine you have multiple dependencies among test scenarios; how many CI pipelines would you need then? Also, you cannot work around this in a local dev environment with CI, simply because you would not set up CI locally.
I'm going to execute some Selenium tests after each Bamboo build. As far as I can see, the best way is to store them in a separate repo and use a specific project (or a stage in an existing one) to run these tests. But there is an issue I can't figure out. I'm using deployment plans to deliver the product to the development environment after the build, so I'd like my tests to be executed only if the deployment was successful. Does anybody know how to express this properly in terms of Bamboo triggers? Thank you.
It's rather a confusing and complicated process. As we all know, Selenium needs a live website to point to in order to execute the tests. There are several ways to accomplish this using Bamboo. I assume you already have the build pipeline set up for automatic deployment. Depending on what you want and how you deploy, several agents can be used to execute the tests. Another way is to use Selenium Grid. You want to trigger the Selenium task after the deployment happens, using several slaves. A grid creates the hub-and-slaves relationship and tells the hub to execute the tests accordingly. Here is some info about the plug-in that can be used to trigger Selenium TestNG tests. And, of course, as you said, you want to make the Selenium task dependent on the deployment, so that if the deployment fails the tests will not run. Hope this helps!
I am at the stage now where I have a fairly good understanding of programming/development, using Java.
Could anyone tell me the best way to start using testing packages? I have looked at Hibernate but am not sure where to go with it...
I use Eclipse 3.5 on Mac OS X. Is it a case of writing scripts to test methods? What is unit testing? etc.
Where do I begin?
Many thanks. Alex
What is Unit Testing
Unit testing is writing code (i.e. test code) that passes known inputs into the code under test and then validates that the code under test returns the expected outputs. It's the most granular testing you can perform on an application. To make it easier, a unit testing framework is usually used. For Java, JUnit is the most popular, but TestNG is also notable.
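For example, a minimal JUnit 4 test (the Calculator class is invented here for illustration):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // Hypothetical class under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    @Test
    public void addReturnsTheSumOfItsArguments() {
        Calculator calculator = new Calculator();
        // Known input (2, 3) must produce the expected output (5).
        assertEquals(5, calculator.add(2, 3));
    }
}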
Getting Started
Unit testing frameworks provide tools for test execution, validation and results reporting. For your setup, Eclipse has built-in support for JUnit. Eclipse is able to automatically detect tests, compile tests and code under test, execute tests, and report results within the IDE. Furthermore, failures are reported as clickable stack trace entries that load the corresponding file at the given line number.
Mock Objects
The fact that you're also working with Hibernate suggests you investigate a mock object framework as well, such as jMock. Mock objects are usually substituted into the code under test's composition and serve two purposes: (1) returning known outputs, and (2) recording that they've been called, and how, so that unit tests can introspect that information as part of validation.
The ability to use mock objects to make testing easier is predicated on dependency injection, that is, on passing in the other entities that compose the object under test. The idea is to decouple dependencies (e.g. Hibernate) so you can focus on testing the algorithms that manipulate the data you're working with.
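As a small sketch of the idea with jMock 2 (the repository and service below are invented for illustration):

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceServiceTest {

    // Hypothetical collaborator that would normally be backed by Hibernate.
    interface RateRepository {
        double rateFor(String currency);
    }

    // Hypothetical code under test; the repository is injected so it can be mocked.
    static class PriceService {
        private final RateRepository rates;
        PriceService(RateRepository rates) { this.rates = rates; }
        double totalFor(String currency, double amount) {
            return amount * rates.rateFor(currency);
        }
    }

    private final Mockery context = new Mockery();

    @Test
    public void computesTotalUsingRatesFromTheRepository() {
        final RateRepository rates = context.mock(RateRepository.class);

        // (1) return a known output and (2) record that the call happened.
        context.checking(new Expectations() {{
            oneOf(rates).rateFor("EUR");
            will(returnValue(1.1));
        }});

        assertEquals(11.0, new PriceService(rates).totalFor("EUR", 10.0), 0.001);
        context.assertIsSatisfied();
    }
}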
Database
However, if you've got code that is not easily refactored, or perhaps you want to validate database code, you can test the Hibernate interaction as well. In that case you want a database in a known state. Three approaches come to mind:
Restoring a database backup at the beginning of each test execution.
Use DbUnit, which provides its own mechanisms for maintaining state.
Transactional locking with rollback: the entire test case is wrapped in a try{} finally{}, where the latter always rolls back the transaction (sketched below).
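The third approach might look like this; this is only a sketch, assuming a mapped User entity and a SessionFactory configured elsewhere:

import java.io.Serializable;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class UserPersistenceTest {

    // Assumed to be built elsewhere (e.g. from hibernate.cfg.xml).
    private SessionFactory sessionFactory;

    @Test
    public void savedUserCanBeReadBack() {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            // User is a hypothetical mapped entity, for illustration only.
            Serializable id = session.save(new User("alice"));
            session.flush();
            assertNotNull(session.get(User.class, id));
        } finally {
            tx.rollback();   // always undo the changes so the database
            session.close(); // stays in its known state for the next test
        }
    }
}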
James Shore ("a thought leader in the Agile software development community") has a series of screencasts in which he demonstrates Test-Driven Development using Eclipse.
http://jamesshore.com/Blog/Lets-Play/
While there are many ways to start testing, there is no "best" way so there's no point in looking for that as a starting point.
Search the web for a good tutorial on JUnit and do it. That will be the absolute best way to get started, IMO. Don't get sidetracked by code coverage or integrating with Hudson or any of the other tasks that are on the periphery of testing. Focus on writing a handful (or 10) of tests first.
Once you understand the basics you can start looking at other tools to see whether they meet your needs better or worse than JUnit does.
First up: Hibernate is not a testing package.
Now that's out of the way, I'd suggest you take a look at JUnit. Read up on unit testing first so you know what it is (the Wikipedia entry is a good place to start), then try the JUnit cookbook. Write some unit tests for a small piece of your code to see how it works, then move on to bigger chunks.
While you are at it, take a look at other development tools like Cobertura (for finding out how good your test coverage is) and static analysis tools like FindBugs and Checkstyle. These all integrate nicely with Ant and probably Eclipse, too.
If you are interested in improving your coding standards and build systems then I highly recommend using Ant, JUnit, Cobertura, Checkstyle and FindBugs together with a continuous integration server (e.g. Hudson or CruiseControl) and a version control system (e.g. Git). With a toolkit like that you can't go wrong.
There are other frameworks out there (TestNG, Mockito, etc.), so take a look at them too and decide which you prefer (EDIT: and which work nicely together. Mockito + JUnit is a good combination.)