I created a Java wrapper to feed JMeter. I have implemented Java classes with Selenium that are invoked by the wrapper and perform GUI tests.
I activated the headless option.
Launching tests with a single user from JMeter, everything works correctly.
When I try to launch tests with two users, they fail.
Can you help me understand why?
Most probably you missed an important bit: each Selenium session needs to have a separate URL, and the Selenium server needs to be running on a different port. So make sure to amend your "wrapper" to be aware of multiple WebDriver instances and to kick off a separate instance of the Selenium server (or standalone client) for each JMeter thread (virtual user).
Unfortunately we cannot help further without seeing your code; just keep in mind that your wrapper needs to be thread-safe. Also pay attention to the jmeter.log file: normally it should contain enough information to get to the bottom of your test failure.
P.S. Are you aware of the WebDriver Sampler plugin? It's designed in line with the JMeter thread model, and you should be able to kick off as many browsers as your machine can handle. If for some reason it doesn't fit your needs, you can at least take a look at the source code to get an idea of what you need to change in your "wrapper".
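The "one WebDriver instance per JMeter thread" idea can be sketched with plain Java. This is only a sketch of the threading pattern: the `FakeDriver` class is a hypothetical stand-in for a real WebDriver (e.g. a ChromeDriver), so the example stays self-contained.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a WebDriver session; in a real wrapper this
// would be an actual browser driver instance.
class FakeDriver {
    private static final AtomicInteger COUNTER = new AtomicInteger();
    final int id = COUNTER.incrementAndGet();
}

public class DriverPerThread {
    // One independent driver per thread (virtual user); withInitial creates
    // a new instance the first time each thread calls get().
    private static final ThreadLocal<FakeDriver> DRIVER =
            ThreadLocal.withInitial(FakeDriver::new);

    public static FakeDriver driver() {
        return DRIVER.get();
    }

    public static void main(String[] args) throws InterruptedException {
        int[] ids = new int[2];
        Thread t1 = new Thread(() -> ids[0] = driver().id);
        Thread t2 = new Thread(() -> ids[1] = driver().id);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Each thread got its own driver instance, so the ids differ.
        System.out.println(ids[0] != ids[1]);
    }
}
```

The same `ThreadLocal` pattern works inside a JMeter Java Request sampler, since each virtual user runs on its own thread.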
Related
I am building a test framework that should support test execution on Android, iOS and Web. Since the app is complex, with many test cases (1000+), one of the requirements is to be able to execute tests in parallel (including tests that require only a single device), as well as tests with parent-child behavior (a chat application: an SMS sent from device 'A' to device 'B'). I am planning to use POM with PageFactory.
So here are my initial thoughts on how to do this:
In beforeSuite, install the application on Android/iOS using the CLI
In testng.xml, provide device-related parameters such as udid, platform, OS, etc.
In beforeMethod in a BaseTest that all tests extend, initialize the driver (Android, iOS or web (Chrome, FF, ...))
In afterMethod, quit the driver (I can also have before/after methods at the test level)
Pros:
easy to manage driver instance creation before each test,
manageable sequential (parent-child) test execution (in testng.xml I can specify parent and child methods),
easily manageable parallel execution.
Cons:
creating an instance of the driver before each test method will be time-consuming,
if one test performs sign-in and the next test needs to send an SMS, then due to driver initialization in beforeMethod I need to perform sign-in again (code duplication).
Could you suggest or point me to a framework that can support all the mentioned requirements?
Should I use Selenium Grid?
Any help is highly appreciated :)
Based on your description, you can check the AppiumTestDistribution framework and extend it for your needs.
I suggest checking the official docs; this project is a good starting point.
One note from me: start the driver only once, in BeforeSuite. Installing the app and the Appium helper apps is time-consuming; you can simply reset the app state in BeforeTest, e.g. by starting the activity.
I'm currently writing a Java program that is an interface to another server. The majority of the functions (more than 90%) do something on the server. Currently, I'm just writing simple classes that run some actions on the server and then check the results myself, or add methods to the test that read back the written information.
Currently, I'm developing on my own computer, and have a version of the server running locally on a VM.
I don't want to run the tests at every build, because I don't want to keep modifying the server I am connected to. I am not sure of the best way to go about my testing. I have JUnit tests (of simple functions that do not interact externally) that run at every build. I can't seem to find an established way in JUnit to write tests that do not have to run at every build (perhaps only when their functions change?).
Or, can anyone point me in the right direction for how best to handle my testing?
Thanks!
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to
This should have raised the alarms for you. Running the tests is what gives you feedback on whether you broke stuff. Not running them means you're blind. It does not mean that everything is fine.
There are several approaches, depending on how much access you have to the server code.
Full Access
If you're writing the server yourself, or you have access to the code, then you can create a test-kit for the server: a modified version of the server that runs completely in-memory and lets you control how the server responds, so you can simulate different scenarios.
This kind of test-kit is created by separating the logic parts of the server from its surroundings, and then mocking them or creating in-memory versions of them (such as databases, queues, file-systems, etc.). This allows the server to run very quickly and it can then be created and destroyed within the test itself.
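A minimal sketch of the test-kit idea in plain Java, with hypothetical names throughout (`Queue`, `InMemoryQueue`, `Server`, `handleRequest` are all made up for illustration): the server depends only on an interface for its surroundings, so the test-kit can substitute an in-memory version and build the whole server inside a test.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical port the server depends on, so its surroundings can be swapped.
interface Queue {
    void publish(String msg);
    List<String> drain();
}

// In-memory version used by the test-kit instead of a real message broker.
class InMemoryQueue implements Queue {
    private final List<String> messages = new ArrayList<>();
    public void publish(String msg) { messages.add(msg); }
    public List<String> drain() {
        List<String> out = new ArrayList<>(messages);
        messages.clear();
        return out;
    }
}

// The server logic only sees the interface, so the test-kit can create
// and destroy the whole server within a single fast test.
class Server {
    private final Queue queue;
    Server(Queue queue) { this.queue = queue; }
    void handleRequest(String payload) { queue.publish("processed:" + payload); }
}

public class TestKitSketch {
    public static void main(String[] args) {
        InMemoryQueue queue = new InMemoryQueue();
        Server server = new Server(queue);
        server.handleRequest("order-1");
        System.out.println(queue.drain());
    }
}
```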
Limited/No Access
If you have to write tests for integration with a server that's out of your control, such as a 3rd-party API, then the approach is to write a "mock" of the remote service, and a contract test to check that the mock still behaves the same way as the real thing. I usually put those in a different build and run it occasionally, just to know that my mock server hasn't diverged from the real server.
Once you have your mock server, you can write an adapter layer for it, covered by integration tests. The rest of your code will only use the adapter, and therefore can be tested using plain unit tests.
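The adapter layer can be sketched in plain Java. All names here are hypothetical (`UserStore`, `InMemoryUserStore`, `roundTrip`): an interface hides the remote service, the HTTP-backed production implementation (not shown) is covered by integration tests, and unit tests use an in-memory fake.

```java
import java.util.HashMap;
import java.util.Map;

// Adapter interface: the only thing the rest of the code sees.
interface UserStore {
    void save(String id, String name);
    String load(String id);
}

// In-memory fake used by plain unit tests instead of the real server.
class InMemoryUserStore implements UserStore {
    private final Map<String, String> data = new HashMap<>();
    public void save(String id, String name) { data.put(id, name); }
    public String load(String id) { return data.get(id); }
}

public class AdapterSketch {
    // Code under test depends only on the UserStore interface,
    // never on the concrete HTTP client.
    static String roundTrip(UserStore store) {
        store.save("42", "alice");
        return store.load("42");
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(new InMemoryUserStore()));
    }
}
```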
The second approach can, of course, be employed when you have full access as well, but writing the test-kit is usually better. Mocks of a shared server tend to be duplicated across projects and teams, so when the server changes, a whole bunch of people need to fix their tests; if the test-kit is written as part of the server code, it only has to be altered in one place.
Context:
Currently I'm working with a Selenium-based system in Java that runs tests using JUnit and Maven. Through the Maven Surefire plugin, I'm able to run tests in parallel. I have ensured the following:
ThreadLocal is used for singleton objects
Separate WebDriver per thread
Using explicit waits (e.g. (new WebDriverWait(webdriver, timeout)).until(ExpectedConditions.________(_____)); )
Problem:
However, when running tests in parallel, I'm getting TimeoutExceptions at WebDriverWait explicit waits. This can occur at any place in the test that uses explicit waits. These timeout exceptions do not occur when the tests run sequentially.
Question:
I would like to know whether any of you have encountered this situation and how you go about solving this problem. Other relevant information and feedback are welcomed too.
Thanks in advance! If you need any supplemented resources such as sample code, I'm happy to provide.
Firstly, I am not sure how to properly use multi-threading with JUnit; last time I tried, I had no success. Anyway, I have had better results with TestNG. Other than that, things are similar to yours: basically, from Maven (Surefire) I am calling testng.xml (reference).
Now, WebDriver, out of the box, is not thread-safe. Threads can get mixed up and all kinds of "near-impossible-to-debug" things can happen. Anyway, lately the WebDriver people have tried to tackle this problem, and we now have the ThreadGuard class available (source). According to the docs:
Multithreaded client code should use this to assert that it accesses
webdriver in a thread-safe manner.
So in your case you can simply use it like this (off the top of my head, sorry for typos):
// import org.openqa.selenium.WebDriver;
// import org.openqa.selenium.firefox.FirefoxDriver;
// import org.openqa.selenium.support.ThreadGuard;
ThreadLocal<WebDriver> driverStore = new ThreadLocal<>();
WebDriver driver = ThreadGuard.protect(new FirefoxDriver());
driverStore.set(driver);
I have had success using this setup.
I have developed a micro-framework in Java which does the following:
The list of all test cases will be in an MS Access database, along with test data for the application to be tested
I have created multiple classes, each having multiple methods within them. Each of these methods represents a test case.
My framework will read the list of test cases marked for execution from Access and dynamically decide which class/method to execute, based on reflection.
The framework has methods for sendkeys, click and all other generic methods. It takes care of reporting in Excel.
All this works fine without any issue.
Now I am looking to run the test cases across multiple machines using Grid. I read on many sites that we need a framework like TestNG to run this on Grid, but I hope it is possible to integrate Grid into my own framework. I have read many articles and e-books, but they do not explain the coding logic for this.
I will be using only windows 7 with IE. I don't need cross browser/os testing.
I can make any changes to the framework to accomplish this. So please feel free.
In the Access DB which I mentioned above, I will have details about test case and the machine in which the test case should run. Currently users can select the test cases they want to run locally in the Access DB and run it.
How will my methods (test scripts) know on which machine they are going to be executed? What kind of code changes should I make, apart from using RemoteWebDriver and capabilities?
Please let me know if you need any more information on my code or have any questions. Also kindly correct me if any of my understanding of Grid is wrong.
How will my methods know which machine they are going to be executed on? You just need to know one machine in a grid setup: the IP of your hub machine. The hub machine will decide where to send the request among the nodes registered with it, depending on the capabilities you specify while instantiating the driver. When you initialize the RemoteWebDriver instance, you need to specify the host (the IP of your hub). I would suggest keeping the hub IP as a configurable property.
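A small sketch of keeping the hub address configurable, assuming the default Grid port 4444 and a hypothetical system property name `grid.hub.host`. Only stdlib code is run here; the actual Selenium call is shown as a comment, since it needs Selenium on the classpath.

```java
import java.net.MalformedURLException;
import java.net.URL;

public class GridConfig {
    // Hub host kept configurable (system property with a local default),
    // so the same tests can be pointed at any grid without code changes.
    public static URL hubUrl() throws MalformedURLException {
        String host = System.getProperty("grid.hub.host", "localhost");
        return new URL("http://" + host + ":4444/wd/hub");
    }

    public static void main(String[] args) throws MalformedURLException {
        System.out.println(hubUrl());
        // With Selenium on the classpath, the driver would then be created as:
        // WebDriver driver = new RemoteWebDriver(hubUrl(), capabilities);
    }
}
```

Run with `-Dgrid.hub.host=10.0.0.5` (for example) to point the tests at a different hub.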
The real use of the grid is parallel remote execution, so how you make your tests run in parallel is something you need to decide. You can use a framework like TestNG, which provides parallelism with simple settings; you might need to restructure your tests to accommodate TestNG. The other option would be to implement multithreading yourself to trigger your tests in parallel. I would recommend TestNG, based on my experience, since it provides many more capabilities apart from parallelism. You need to take care that each driver instance is specific to its thread and not a global variable.
All tests can hit the hub and the hub can take care of the rest.
It is important to remember that Grid does not execute your tests in parallel for you. It is the job of your framework to divide tests across multiple threads and collate the results. It is also key to realise that when running on Grid, the test script still executes on the machine the test was started on. Grid provides a REST API to open and interact with browsers, so your test will use this rather than opening a browser locally. Any other non-Selenium code will be executed in the context of the original machine, not the machine where the browser has been opened (e.g. file-system access is not where the browser has opened). Any use of static classes and globals in your framework may also cause issues, as each test will access these concurrently. Your code must be thread-safe.
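The point about shared globals can be illustrated with plain Java (a hypothetical shared counter, not part of any real framework): a plain static `int` incremented from several threads would typically lose updates, whereas an atomic type keeps the shared state correct.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedStateDemo {
    // A plain static int here would typically lose increments under
    // concurrent access; AtomicInteger makes each increment thread-safe.
    private static final AtomicInteger passedTests = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                passedTests.incrementAndGet();
            }
        };
        Thread a = new Thread(worker);
        Thread b = new Thread(worker);
        a.start(); b.start();
        a.join(); b.join();
        // Two threads, 100,000 increments each: the total is exact.
        System.out.println(passedTests.get());
    }
}
```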
Hopefully this hasn't put you off using Grid. It is an awesome tool and really easy to use. It is the parallel execution which is hard, and frameworks such as TestNG provide this out of the box.
Good luck with your framework.
Similar to "How do I pass program arguments in Java for my FitNesse fixture?"
.. I wish to kick off my FitNesse tests in parallel using fitnesseMain.FitNesseMain.launchFitNesse(Arguments arguments) and pass thread-safe objects to each test, to be accessed later by test code run by FitNesse.
The test code itself is plain old Java, invoked from FitNesse using GivWenZen. The Java test code goes on to dynamically kick off Selenium tests.
I need to pass these thread-safe objects through FitNesse all the way to the Java test scripts so that they start a Selenium RemoteWebDriver with the correct org.openqa.selenium.remote.DesiredCapabilities.
I have tried using good old java.lang.ThreadLocal, but it appears that FitNesse spawns threads of its own to run the tests, which effectively eliminates this option.
Considering GivWenZen is written using Slim, I don't think what you want to do is possible. If it is possible, it certainly isn't easy, as Slim works by running the tests in a separate process.
So when you run FitNesse, it creates the web server and the wiki; that runs as one Java process. When you click the Test or Suite buttons (or use the URLs), it creates a new Java process, the SlimServer. The FitNesse server then sends instructions to the SlimServer as strings, and the SlimServer processes those into instructions to run tests. So the coupling between the code launched via FitNesseMain and the Slim tests is actually fairly loose. This is done on purpose, as it lets the SlimServer implementation be language-independent.
Within the SlimServer there is the ability to work with actual object references, and that might be OK, but I have doubts that the chain of custody will be thread-safe at each step.
Sorry. Maybe someone else will have an idea for how to work around the issues I've described.