Passing arguments from Eclipse to TestNG - java

I am looking for a way to write a test method in TestNG such that when I select a file in Eclipse and choose Run -> TestNG, the selected file's full name is passed along and the test receives that file name as an argument.
Is this possible with TestNG? I know how to make Eclipse send the correct argument; I am just not sure how to make TestNG accept it.
Thanks

This is kind of hacky... but you could try launching it from Ant. Rather than writing a full build.xml, just call the TestNG Ant task directly and feed it the class/file name. That way all the work is done for you. I also believe that TestNG requires its own JVM when it runs, so this will take care of that as well.
Edit: I think I misread your question.
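If all you need is to get the selected file's full name into the test, one option is to pass it as a JVM system property in the TestNG launch configuration's VM arguments and read it inside the test. A minimal sketch; the property name input.file is just an illustration (e.g. -Dinput.file=/path/to/Foo.java):

import org.testng.annotations.Test;

public class SelectedFileTest {

    @Test
    public void processesSelectedFile() {
        // Read the file name passed from Eclipse via -Dinput.file=...
        String fileName = System.getProperty("input.file");
        if (fileName == null) {
            throw new IllegalStateException("input.file system property was not set");
        }
        // ... run the real check against fileName here ...
        System.out.println("Testing against " + fileName);
    }
}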

Related

Is it possible to create a top-level test run configuration in IntelliJ IDEA so that all my JUnit tests run with it?

I've got a bunch of Selenium tests in my project and I'd love to run them from IDEA. I need to pass certain VM arguments (where my Firefox binary is located, etc.) and I don't want to create a run configuration for every test class that I have.
There are also too many tests to just run them all every time.
So, does anyone know if it's possible to create a "parent" run config which would be used for all tests in a certain path, whether I run them together or just a single one?
Now I feel silly :P
Run Configurations has a Defaults tab where you can set default values, such as VM arguments, for JUnit run configurations.
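For example, the default VM options field there could hold something like the following (the Selenium Firefox binary property and the path are illustrative):

-Dwebdriver.firefox.bin=/usr/local/bin/firefox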

invoking an ant command via make

I'm trying to do something which may not be done that often. It's part of our test code, which uses a wide variety of tools: Java, Ant, make, and XML.
Our java testing tool parses XML. It gets an argument via an XML tag from a file:
<TAG>-Darg1="argument1" -Darg2="argument2"</TAG>
Within the java code, I'm calling the make command. The make command is invoked from java (via ProcessBuilder).
In the makefile, I am calling ant, and the -Darg1="argument1" -Darg2="argument2" arguments should be passed along.
But it's not working.
Anyway, it seems that the -D from the -Darg1=... part is not accepted by make, so I'm trying to wrap those options in a variable that I can pass through make to ant. But within ant, the variable "RULES_ARGS" is treated as a single argument instead of two.
I've tried various quoting mechanisms in the XML: "-Darg1=argument1 -Darg2=argument2", "-Darg1="argument1" -Darg2="argument2""
and also where it's invoked from Java: "RULES_ARGS="+RulesArgs+" ", "RULES_ARGS=\""+RulesArgs+"\" " (in combination with the XML part),
etc., all with no desirable result.
Has anyone tried to do something similar and found a working approach to this problem?
Thanks in advance.
Are you saying you're trying to run make with -D options? That won't work because make doesn't support -D.
If you're trying to pass some flags through the make command line to be used on the ant invocation, then you should do something like this: when you run make, set a variable on the make command line:
make ANTFLAGS='-DFOO -DBAR -DBAZ'
Then inside your makefile, when you run ant, pass that variable:
runant:
	ant $(ANTFLAGS) ...
If that's not what you're trying to do, please clarify your question. For example, you say it's not working and gives no desirable result, but you don't give any details about the errors or incorrect behavior you see. Such error messages would go a long way towards clarifying exactly what's going on.
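On the Java side, since the make command is launched via ProcessBuilder, you can pass the whole variable assignment as a single argument, which sidesteps the quoting problems entirely. A minimal sketch; the target name and argument values are illustrative:

import java.io.IOException;

public class RunMake {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Each list element is exactly one argument to make, so the entire
        // ANTFLAGS assignment stays intact; no shell is involved, so no extra
        // quoting is needed.
        ProcessBuilder pb = new ProcessBuilder(
                "make", "runant", "ANTFLAGS=-Darg1=argument1 -Darg2=argument2");
        pb.inheritIO();
        int exitCode = pb.start().waitFor();
        System.out.println("make exited with " + exitCode);
    }
}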

NetBeans Test File and Run File difference

In NetBeans, after creating a program, when I want to run a file I right-click it and two options are enabled: one is Test File and the other is Run File. What is the difference? I get confused by this so many times.
I guess from your question that you want to write some C++ or Java kind of program and run the files individually.
So, I suggest you use a simpler editor for that kind of development.
NetBeans is generally used for bigger projects (but you can use it for a single file as well) and it helps in so many other aspects... (which I suppose you don't require).
In NetBeans, Run may have different meanings depending on the type of project you're working on.
In a Java project, Run File with the green arrow means running the main method of a Java class.
You may even notice that the Run File option is grayed out if a class does not have a main method.
In a Web or Enterprise project it means deploying the project to an associated application or web server.
The Test File option means running the test cases for an individual file at a time, or for an entire project at once. Test cases are usually created with a unit-testing library like JUnit or TestNG. If you don't know what a unit test is, you may like to read this for reference.
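To make the distinction concrete, here is a small illustration (the class names are made up): Run File would execute the first class below through its main method, while Test File would run the JUnit test against it.

// Greeter.java -- Run File executes its main method
public class Greeter {
    public static String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        System.out.println(greet("world"));
    }
}

// GreeterTest.java -- Test File runs this JUnit test instead
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GreeterTest {
    @Test
    public void greetsByName() {
        assertEquals("Hello, world", Greeter.greet("world"));
    }
}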
I hope it helps.

Modularizing JMeter tests?

We are looking to migrate our testing framework over to JMeter. We have 50+ test cases, each of them with repeating actions, like logging in and logging out for example. How can I modularize my approach?
What I'm looking for specifically is a "Add test item from file" so that I could for example, add the login code.
We also have things like connection IDs that need to be passed on every request. Is there any way JMeter can automatically replace all occurrences of them with a JMeter variable? At the moment the proxy recorder records the actual connection string, and we have to manually replace that with ${connectionID}. Is there a better way?
This works fine for me.
Make a new thread group at the bottom of the test plan and put a Simple Controller in it. Inside the Simple Controller put the code you want to repeat. I use two Simple Controllers, but one is actually a DB test case suite. While keeping whatever is inside the thread group enabled, make sure to set the thread group itself to disabled, or else it will execute again on its own.
Now, in any particular test case, add User Parameters and add a Module Controller. The Module Controller can point to the Simple Controller section(s) you made before. Have the Simple Controller use ${variables}, then override them inside the particular test you are running by putting the variables in the User Parameters. Thus you get different variables and tests with the same suite.
I put a Simple Controller inside the Simple Controller to add lengthy DB tests. This ends up as
Thread Group > Simple Controller > Simple Controller > JDBC Request. All are renamed.
You can select different ones in the Module Controller inside the tests. I have about six right now, but this gets repeated dozens of times.
This is all with stock JMeter 2.3. If you are in an environment where you can't install plugins, this will work fine. I've never tried them.
HTH
As far as automatically replacing the connection IDs, there is not, to my knowledge, a way to do that via the GUI. However, the test scripts are simple XML files and so it would be very easy to write a sed or awk script to do that replacement for you.
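For example, a one-off sed command along these lines could do the substitution across a recorded plan (the literal connection ID and the file name are made up):

sed -i 's/conn-12345/${connectionID}/g' MyTestPlan.jmx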
As for the "add test item from file" part, in 2.6 (not sure about other versions, I haven't used them) there is a logic controller called "Include Controller" that can load test snippets. There is also the ability to save snippets of test code, called "test fragments", to their own .jmx files.
If you start a new test plan, right-click on Test Plan, then Add -> Test Fragment -> Test Fragment; this will add the container. You can then add your other requests to it and use this chunk inside the aforementioned Include Controller.
If you are able to use the latest version, or if the prior versions support this, this may be a simpler option than writing your own plugin.
By using the JMeter elements below, we can modularize the test scripts.
Test Fragment
Module Controller
Parameterized Controller
Include Controller
Please check this for more details & examples.
http://www.testautomationguru.com/jmeter-modularizing-test-scripts/
I know of 2 options for you:
Module Controller
Parameterized Controller
What I'm looking for specifically is a "Add test item from file" so that I could for example, add the login code.
Sounds like you might want to encapsulate some of that repeated logic in your own custom Samplers or Config Elements. There is a guide to writing plugins for JMeter on the project page.
This is the approach that we have taken on my current team for handling JMeter simulating requests of a custom RPC format.
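For a rough idea of what that involves, a custom sampler is essentially a class extending JMeter's AbstractSampler. The sketch below is illustrative; the class name and the placeholder login logic are made up:

import org.apache.jmeter.samplers.AbstractSampler;
import org.apache.jmeter.samplers.Entry;
import org.apache.jmeter.samplers.SampleResult;

public class LoginSampler extends AbstractSampler {

    @Override
    public SampleResult sample(Entry entry) {
        SampleResult result = new SampleResult();
        result.setSampleLabel(getName());
        result.sampleStart();
        // ... perform the shared login (or custom RPC) request here ...
        result.sampleEnd();
        result.setSuccessful(true);
        return result;
    }
}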
One thing you can do is run the JMeter scripts in non-GUI mode. You can specify the test scripts to run and put a batch of them into a .bat file, like:
@echo off
jmeter -n -t MyTestPlan1.jmx
jmeter -n -t MyTestPlan2.jmx
Another way, as #matt said, is to write a plugin to get what you need.

Generating Unit Tests Automatically

I have a web tool which when queried returns generated Java classes based upon the arguments in the URL.
The classes we retrieve from the webserver change daily and we need to ensure that they still can process known inputs.
Note that these classes do not test the webserver; they run locally and transform XML into a custom format. I am not testing the webserver.
These classes must then be placed in a specific package structure, compiled, run against a known set of input data, and compared against known output data.
I would like to do this automatically each night to make sure that the generated classes are correct.
What is the best way to achieve this?
Specifically, what's the best way to:
retrieve the code from a webserver and place it in a file
compile the code and then call it
I'm sure a mix of JUnit and Ant will be able to achieve this, but is there a standard solution / approach for this?
First up, to answer your question: No, I do not think that there is a standard approach for this. This sounds like quite an unusual situation ;-)
Given that, what I would do is write your JUnit tests so that they all call a class named GeneratedCode; then, once you download the code, rename the downloaded class to GeneratedCode, compile it, and run your unit tests.
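A minimal sketch of the download-and-compile step, using the JDK's built-in compiler API (the URL, target path, and class name are illustrative):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class FetchAndCompile {
    public static void main(String[] args) throws Exception {
        // Download the generated source from the web tool.
        URL source = new URL("http://example.com/generate?arg=foo");
        Path target = Paths.get("src/generated/com/example/GeneratedCode.java");
        Files.createDirectories(target.getParent());
        try (InputStream in = source.openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }

        // Compile it; requires a JDK (getSystemJavaCompiler() returns null on a plain JRE).
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int result = compiler.run(null, null, null, target.toString());
        if (result != 0) {
            throw new IllegalStateException("Compilation of generated code failed");
        }
        // The JUnit tests that reference GeneratedCode can then be run as usual,
        // e.g. from an Ant target scheduled to run nightly.
    }
}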
You have the same goal as continuous integration ;-)
Maybe a bit of overkill for this simple task, but this is the standard way to regularly fetch something, compile it, and test it.
E.g. you could try Hudson.
You should be creating a "mock" interface for your web service that (a) behaves the same way and (b) returns a known answer.
You should then do some other integration testing with the live web service where a person looks at the results and decides if they worked.
Can you only test the generated classes after they have been published on the webservice? Do you have no way to test during or just after the generation?
One idea, if the generated code isn't too complex, is to load it via the GroovyClassLoader and run your tests against it. See this page for examples.
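A minimal sketch of that approach, assuming the Groovy jar is on the classpath and the generated source is acceptable to the Groovy parser (plain Java syntax mostly is); the file name is illustrative:

import groovy.lang.GroovyClassLoader;
import java.io.File;

public class LoadGenerated {
    public static void main(String[] args) throws Exception {
        GroovyClassLoader loader = new GroovyClassLoader();
        // Parse and load the downloaded source directly, without a separate compile step.
        Class<?> generated = loader.parseClass(new File("GeneratedCode.java"));
        Object instance = generated.getDeclaredConstructor().newInstance();
        System.out.println("Loaded " + instance.getClass().getName());
    }
}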
