We have a Serenity framework in which the screenshot and movie for a failure are recorded during the execution; this recording is part of the run and happens sequentially, i.e.:
Test Case Step1 - Pass
Test Case Step2 - Fail
Movie and Screenshot of failure for Step2
Test Case Step3 - Pass
The movie and the screenshot are uploaded to an FTP server, which can cause slowness and sometimes hangs if the network is slow.
My question is:
Where is the best place to save these screenshots and movies? Would NetApp or box.com solve this? I believe not, as they too depend on the network speed.
OR
Can we use threads, i.e. the execution continues while another thread handles screenshot/movie creation and does the upload, without impacting the current execution? But I am not sure if Serenity supports this.
OR
Save the files in the local project directory and then upload them after the execution?
The execution results are very important and they need to be backed up
Serenity BDD does not record movies, only screenshots. These take very little time if you configure them to be taken only on failures, and they are saved and processed in a separate background thread, so writing them out doesn't slow down the tests (although the WebDriver calls to retrieve the screenshot data will slow the tests down if screenshots are taken for every action). It doesn't currently support uploading the files anywhere, though.
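For reference, the failure-only behaviour is usually controlled by a property along these lines in serenity.properties (the exact property name depends on your Serenity version; older releases used the thucydides.* prefix):

    serenity.take.screenshots = FOR_FAILURES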
I presume the movies you refer to are done within your own code, so Serenity would have no control over these.
Screenshots are automatically recorded alongside the other test outcomes. The simplest approach would be to upload/sync them after the build has completed, or to simply store them on your CI server (which is what most shops seem to do).
Related
I am looking for help with the automation logic for the scenario below.
I have to create an application (say, something like creating a user) in my web app. After creation, we can find the application through search and open it using its unique number. A File Upload button is then enabled for that application so any files can be uploaded. The catch is that, for the File Upload button to become enabled, the application has to go through a couple of workflows inside the system, so it takes approximately 3 to 5 minutes for the button to be enabled for that single application. I am looking for how to automate this case in Cucumber-Selenium-Java.
Even if we script it, Selenium will not wait 3-5 minutes for the File Upload button to become enabled. Waiting manually is not an option either, and it is not good practice; it would slow down the execution, as I have to create approximately 30 applications, and the workflow processes each application one by one in sequence before its File Upload button is enabled. That is how the development design has been made.
So, any help or suggestions on how we can automate these types of cases?
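One common way to handle this kind of long-running workflow is an explicit Selenium wait with a long timeout and a slow polling interval, so the test continues as soon as the button becomes clickable rather than sleeping for a fixed time. A minimal sketch, assuming the Selenium 4 Duration-based API and a hypothetical locator and timeout:

    import org.openqa.selenium.By;
    import org.openqa.selenium.NoSuchElementException;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.FluentWait;
    import org.openqa.selenium.support.ui.Wait;

    import java.time.Duration;

    public class UploadButtonWait {

        // Wait up to 6 minutes, polling every 10 seconds, for the button to become clickable.
        public static WebElement waitForUploadButton(WebDriver driver) {
            Wait<WebDriver> wait = new FluentWait<>(driver)
                    .withTimeout(Duration.ofMinutes(6))
                    .pollingEvery(Duration.ofSeconds(10))
                    .ignoring(NoSuchElementException.class);
            // "uploadFileBtn" is a hypothetical id; replace it with your real locator.
            return wait.until(ExpectedConditions.elementToBeClickable(By.id("uploadFileBtn")));
        }
    }

Because the wait returns as soon as the condition is met, the timeout only costs its full length in the worst case; you could also create all 30 applications first and then poll each one in turn, so the workflow time overlaps across applications.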
The scenario is as follows:
I want to pause the test when it encounters the Button in the wiki page test scenario. It should wait until the user presses the Button, and once it is pressed, the test should continue.
As the automated tests are designed to run in a full set without any monitoring or midway user interaction, this is not a standard feature. Feel free to edit the source where needed and recompile.
Since you tagged this question with selenium-fitnesse-bridge, my assumption is that you are testing the browser user interface of an application via Selenium WebDriver, but instead of driving the tests from xUnit you are driving them from FitNesse.
First, this isn't really the sweet spot of FitNesse - its main purpose is to test business logic by interacting with the system under test, as opposed to running end-to-end tests by driving a browser. That soapbox aside, you are creating fixtures for FitNesse to interact with, and those fixtures currently contain WebDriver code, so you can put the pause inside your fixture class. I'd need to see your test table, and whether you are using Slim or not, to get an idea of the logical place in your fixture code for the wait.
The only problem with that solution is if you want to specify on the fixture page that there should be a wait at a certain point - you don't just want it hidden behind the scenes in the WebDriver code. In that case, you could probably use a script table style of fixture (http://www.fitnesse.org/FitNesse.UserGuide.WritingAcceptanceTests.SliM.ScriptTable) and have a command in the script that maps to a method that waits for a specified amount of time or for a specified element to become visible.
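For illustration, a Slim script-table fixture method along those lines might look roughly like this (the class, method, and locator names are hypothetical, the fixture is assumed to already own the WebDriver instance, and the Selenium 4 Duration-based API is used):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    import java.time.Duration;

    public class BrowserFixture {

        private final WebDriver driver;

        public BrowserFixture(WebDriver driver) {
            this.driver = driver;
        }

        // Script table row: | wait seconds | 30 |
        public void waitSeconds(int seconds) throws InterruptedException {
            Thread.sleep(seconds * 1000L);
        }

        // Script table row: | wait for element visible | #continueButton |
        public boolean waitForElementVisible(String cssSelector) {
            new WebDriverWait(driver, Duration.ofSeconds(120))
                    .until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector(cssSelector)));
            return true;
        }
    }

The wait-for-element variant is usually preferable to a fixed sleep, since the test resumes as soon as the element appears.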
(I'm sorry for the title, it was the best I could come up with)
I have a Play Framework (2.3) app where my users can upload large CSV files that will be processed.
Once the CSV is imported, I run a task that passes over each new entry and checks a specific piece of data with a request to an external API (for each entry).
Since this takes a long time, I do the checking asynchronously, but I'm facing a structural issue here:
User A uploads a file with 100k lines
I add those 100k lines to my async code and start it
User B uploads a file with 200k lines
I would add those new lines to the current async code
I stop the app (updating the code)
When restarting, it should start where it stopped.
I thought about a queue system, but I would lose its contents when restarting the app.
Any idea how I can do this?
Thank you for your help.
Since the question is structural, I am only going to focus on the high-level implementation details:
I stop the app (updating the code)
When you decide to stop the application, you need a way to stop all running threads gracefully. For this, every thread you start in your application should be registered with some sort of thread manager. That way, when you stop the application (by clicking a stop button or bringing down the app server), the thread manager knows which threads are running and can give them a chance to save their state or finish their work, while preventing new threads from being spawned, before finally bringing down the main thread itself.
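A minimal sketch of that idea, using a single ExecutorService as the "thread manager" and a JVM shutdown hook (the names are illustrative; in Play 2.3 you would typically trigger the same logic from GlobalSettings.onStop rather than a raw shutdown hook):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class WorkerManager {

        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        public void submit(Runnable task) {
            pool.submit(task);
        }

        public void registerShutdownHook() {
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                pool.shutdown();                       // stop accepting new tasks
                try {
                    if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
                        pool.shutdownNow();            // interrupt whatever is still running
                    }
                } catch (InterruptedException e) {
                    pool.shutdownNow();
                    Thread.currentThread().interrupt();
                }
            }));
        }
    }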
When restarting, it should start where it stopped
To start from where you stopped, you need to save the state of the completed work somewhere. Assuming you are using a queue-based system, you will have to serialize your queue before the app stops. This way, you won't lose the contents of the queue when you bring the app back up.
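A minimal sketch of the persistence side, assuming each pending entry can be written as one line of text (the file name and class are hypothetical; a database or a durable message broker would be a more robust choice than a flat file):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class PersistentQueue {

        private final Queue<String> pending = new ConcurrentLinkedQueue<>();
        private final Path backingFile = Paths.get("pending-entries.txt");

        // Called from the shutdown logic: write every unprocessed entry, one per line.
        public void saveOnShutdown() throws IOException {
            Files.write(backingFile, pending, StandardCharsets.UTF_8);
        }

        // Called on startup: reload whatever was left over from the previous run.
        public void restoreOnStartup() throws IOException {
            if (Files.exists(backingFile)) {
                pending.addAll(Files.readAllLines(backingFile, StandardCharsets.UTF_8));
            }
        }

        public void add(String entry) {
            pending.add(entry);
        }

        public String poll() {
            return pending.poll();
        }
    }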
I am working with Java in IntelliJ and have a test suite that I would eventually like to run automatically, export the test results to a file, and email those results to my boss. The test runs and the e-mail is sent with the attachment; I just can't seem to figure out whether there is a certain method I can implement to do all of this automatically.
You can achieve this by installing a continuous integration server, which will monitor your version control system, run the tests every time you commit code, and send the notifications according to your configuration (for example, by sending an email to your boss if that's what you need).
Popular continuous integration servers include Jenkins and TeamCity.
We have developed Selenium WebDriver scripts with JUnit + Java using Eclipse on Windows 7. All the scripts are working as expected, and we are now using them for load testing with JMeter. However, while running the scripts, the system opens multiple browsers (200, based on the number of user threads), which causes the system to hang. Is there any way to handle this, or can we run the scripts without opening a browser? I have come across the Xvfb tool, but I am not able to find a Java API for it to plug into Eclipse.
We have also tried HtmlUnitDriver, but since it does not support JavaScript the tests fail; we tried HtmlUnit as well and found the same thing.
Note: we have written the WebDriver scripts to verify display items of elements (autocomplete, images) on screen.
It would be great if anyone could help or provide more input on this...
Firstly, do not integrate Selenium scripts with JMeter for load testing! It's not a good approach, for exactly the reasons you mention in your post. I followed a similar approach when I was new to JMeter and Selenium, but suffered a great deal when it came to running load tests that spawned too many browser instances and killed the OS.
You can go for HtmlUnitDriver or any of the headless browser testing tools out there with JMeter, but they will still be running a browser internally in memory. Moreover, if your application uses JavaScript heavily, it won't help.
So I would suggest that you record a browsing session with the JMeter Proxy, modify the script (the set of requests) according to your needs, and replay those requests alone with the desired number of threads.
At a high level, you should do the following:
Add a JMeter test plan, listeners, and a thread group; set up the JMeter proxy; and record a browsing session in which you enter something into the autocomplete textbox and get results.
Stop your proxy and take a look at all the requests that appear under your thread group.
As far as I know, when it comes to autocomplete plugins, multiple requests are sent every time you enter a letter into the textbox. For example, for the word 'stackoverflow':
Request 1: q=s
Request 2: q=st
Request 3: q=sta
and so on.
You can simulate this effect by choosing words that all have the same length, which in turn means each thread sends the same number of requests to the server.
So in your test plan, you will pass one word per JMeter thread. You can feed the words into the request from a CSV file by using JMeter parametrization, as sketched below.
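For illustration, the data file and the parametrized request might look like this (the file name and the variable name are hypothetical). A words.csv with one search term per line, all of the same length:

    selenium
    webtests
    loadtest

In the recorded HTTP Request, the query parameter then becomes:

    q=${word}

where a CSV Data Set Config element is pointed at words.csv with word as the variable name, so each JMeter thread picks up the next word from the file.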
This will be a much more memory-efficient way of load testing than using Selenium with JMeter. I had asked a similar question; you can check out the responses.