In NetBeans, after creating a program, when I right-click a file to run it, two options are enabled: one is Test File and the other is Run File. What is the difference? I get confused by this all the time.
I guess from your question that you want to write some C++- or Java-style programs and run them individually.
So, I suggest using a dedicated editor for each kind of development.
NetBeans is generally used for big developments (though you can use it for a single file as well), and it helps in many other aspects... (which I suppose you don't require).
In NetBeans, Run may have different meanings depending on the type of project you're working on.
In a Java project, Run file with a green arrow means running the main method of a Java class.
You may even notice that the Run File option is grayed out if a class does not have a main method.
In a Web or Enterprise project it means deploying the project to an associated application or web server.
The Test option means running the test cases for an individual file, or for an entire project at once. The test cases are usually created with a unit-testing library like JUnit or TestNG. If you don't know what a unit test is, you may like to read this for reference.
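For a concrete picture, here is a minimal sketch of the kind of JUnit 4 test class that Test File would run (the class and method names are made up for illustration):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

    // "Test File" on this class runs every @Test method and reports the results.
    @Test
    public void additionWorks() {
        assertEquals(4, 2 + 2);
    }
}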
I hope it helps.
Related
I've got a bunch of Selenium tests in my project and I'd love to run them with IDEA. I need to pass certain VM arguments (where my Firefox binary is located, etc.) and I don't want to create a run config for every test class that I have.
There are also too many tests to just run all every time.
So, does anyone know if it's possible to create a "parent" run config which would be used for all tests in a certain path whether I run them together or just a single one?
Now I feel silly :P
Run Configurations has a Defaults tab where you can set default values for JUnit tasks.
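For example, the VM parameters field in that Defaults tab takes ordinary system properties, so a single line like the one below would then apply to every JUnit run. If you're on Selenium WebDriver, its Firefox-binary property is a natural candidate (the path here is made up):

-Dwebdriver.firefox.bin=/usr/local/bin/firefox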
I've seen some examples that run a JUnit class with JUnitCore.runClasses(Test.class). However, we can easily run JUnit test classes by right-clicking the class file and choosing "Run With->JUnit" in most IDEs. So my question is: with the IDEs available, what is the use of JUnitCore.runClasses? Is it still necessary to write classes that use JUnitCore.runClasses?
JUnitCore#runClasses is usually used when you want to write a program that runs tests (i.e., a runner).
Since you're running from inside an IDE, there's probably no reason for you to use it in this scenario.
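For completeness, a standalone runner built on it looks roughly like this (TestRunner and MyTest are made-up names for illustration):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class TestRunner {

    public static void main(String[] args) {
        // Run all @Test methods in MyTest without any IDE involved.
        Result result = JUnitCore.runClasses(MyTest.class);
        for (Failure failure : result.getFailures()) {
            System.out.println(failure.toString());
        }
        System.out.println("Successful: " + result.wasSuccessful());
    }
}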
I am very new to TDD in general, so please forgive me if my question does not make much sense.
After looking around for a bit, it seems that JUnit is capable of implementing integration tests. I am hoping that the community can provide me some guidance on how to write them. Here is a simple overview of my design.
I have Main1, which accepts a list of zip files. Main1 will extract the zip files, edit the content of the PDFs inside them, and put the final PDF files into folder X. If the number of PDFs reaches a THRESHOLD, then Main2Processor (not a main class) will get invoked; it zips all the PDF files and also creates a report text file with the same name as the newly created zip file.
If I run Main2, it also kicks off Main2Processor, which will zip the PDF files and create the text file reports (even though the number of PDFs in folder X did not reach the THRESHOLD).
How do I write an integration test that checks the correctness of the above design?
You're right; JUnit can be used to write tests that would be called integration tests. All you have to do is relax the rules regarding tests not touching external resources.
First, I would refactor the main() of your application to do as little as you can possibly make it do; there isn't a really good way to test the code in a main() function. Have it construct and run an object (that object can be the object containing main(), if you wish), passing that new object your list of ZIP files. That object can now be tested using JUnit by just instantiating it.
Now, you just have to architect the test to set up a constant test environment and then perform a repeatable test. Create, or clear out, a temp directory somewhere, then copy over some test ZIP files into that directory. Then, run your main processor.
To detect that the proper behavior occurs when the threshold is reached, you just test for the existence of a zip file (and/or its absence if the threshold isn't reached).
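As a rough sketch of what that could look like with JUnit 4 (ZipProcessor is a hypothetical class extracted from Main1's logic, and the fixture paths are made up):

import static org.junit.Assert.assertTrue;

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.stream.Stream;

import org.junit.Before;
import org.junit.Test;

public class ZipProcessorIntegrationTest {

    private Path outputDir;

    @Before
    public void setUpCleanEnvironment() throws Exception {
        // A fresh temp directory per run keeps the test repeatable.
        outputDir = Files.createTempDirectory("zip-test-output");
    }

    @Test
    public void producesZipOnceThresholdIsReached() throws Exception {
        ZipProcessor processor = new ZipProcessor(outputDir.toFile());

        // Feed in enough fixture zips to cross the threshold.
        processor.process(Arrays.asList(
                new File("src/test/resources/f1.zip"),
                new File("src/test/resources/f2.zip")));

        // Assert on the observable output: a zip file must now exist.
        try (Stream<Path> files = Files.list(outputDir)) {
            assertTrue("expected a zip once the threshold is reached",
                    files.anyMatch(p -> p.toString().endsWith(".zip")));
        }
    }
}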
Do you really want an "integration test"? (That term is overloaded beyond comprehension now, so stating your end goals would help.) How about an acceptance test, where you use this console/GUI app like a real user, with specific input, and check for the expected output?
JUnit is just a test runner and is oblivious to what the test actually does. So yes, you could use it to write any kind of test. However, it was built for unit testing, and that will leak through sometimes, e.g. in the fact that a test stops at the first error / the first assert that does not succeed. Coarse-grained tests usually like to push ahead and report a bunch of errors at the end.
If you want an integration test, you will have to redesign your app to be callable from tests (if it is not already). Point out the specific obstacles to writing this test and I can offer more specific advice.
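As an aside on the stop-at-first-assert point: JUnit 4.7+ has an ErrorCollector rule that lets a test record several failures and report them all at the end, which softens that limitation a little. A minimal sketch (the checked values are made up):

import static org.hamcrest.CoreMatchers.equalTo;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class CoarseGrainedTest {

    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void reportsAllFailuresAtTheEnd() {
        // Each failed check is recorded, but the test keeps running.
        collector.checkThat("first", equalTo("first"));
        collector.checkThat("second", equalTo("oops")); // recorded, not fatal
        collector.checkThat("third", equalTo("third"));
    }
}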
I think that you should create some utility methods for your tests, for example for running the application, checking a directory, clearing a directory, etc.
Then you will be able to implement tests like the following:
@Test
public void mytest1() {
    // exec() and getFileCount() are the utility helpers suggested above
    exec(Main1.class, "f1.zip", "f2.zip");
    Assert.assertTrue(getFileCount(OUTPUT_DIR) < THRESHOLD);
    // perform verification of your files etc...
}
First of all, you might restate your specification above from a "test sequence" point of view. For example, test one would provide Main1 with a set of N PDF files, N being under the threshold. Then your test code, after Main1 returns, would check the contents of folder X, as well as the reports, to verify that your expectations are met.
JUnit itself just helps run test cases; it does not really help write the tests.
And JUnit is "unit test" oriented (you can use it for integration tests as well, although some situations do not suit it well: when a global setup is required, for example, or when the test cases are expected to run in a specific order...).
Some additional libraries can help greatly to interact easily with the rest of your code: DbUnit, HttpUnit and so on.
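On the global-setup caveat: the closest JUnit 4 gets out of the box is @BeforeClass, which runs once per test class rather than once per suite; a rough sketch:

import org.junit.BeforeClass;
import org.junit.Test;

public class GlobalSetupTest {

    @BeforeClass
    public static void setUpOnce() {
        // Runs once before all tests in this class -- per-class only,
        // so a truly global, cross-class setup still needs extra machinery
        // (e.g. a suite-level fixture).
    }

    @Test
    public void firstScenario() {
        // ...
    }
}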
I am continuing the development of a serialization layer generator. The user enters a description of types (currently in XSD or WSDL), and the software produces code in a certain target language (currently Java and ANSI C89) which is able to represent the described types and to serialize (turn into a byte sequence) and deserialize their values.
Generating code is tricky (I mean, writing code is hard; writing code that writes code is writing code to do a hard thing, which is a whole new land of hardness :) ). Thus, in the project which preceded my master's thesis, we decided that we wanted some system tests in place.
These system tests know a type and a number of pairs of values and byte sequences. In order to execute a system test in a certain language, the type is run through the system, resulting in code as described above. This code is then linked with some handwritten host code, which is capable of reading these pairs of byte sequence and value, and which contains functions to read values of the given type from a string. The resulting executable is then run, the byte-value pairs are fed into it, and it is checked whether every such binding results in the output "Y". If this is the case, then these example values for the type serialize into the previously defined byte sequences, and we can conclude that the generated code compiles and runs correctly, and thus, overall, that the part of the system handling this type is correct. This is a very good thing.
However, right now I am a bit unhappy with the current implementation. Currently, I have written a custom JUnit runner which uses quite a lot of reflection sorcery in order to read these byte-value bindings from a class's attributes. Also, the overall stack to generate the code requires a lot of boilerplate code and boilerplate classes which do little more than contain two or three strings. Even worse, it is quite hard to get a good integration with all the tools which build on JUnit's descriptions and generate test failure reports. It is quite hard to debug what is happening if the helpful Maven JUnit test runner or the Eclipse test runner gobbles up whatever errors the compiler threw, just because the format of that error is different from JUnit's own assertion errors.
Even worse, a single failed test in the generated code causes the Maven build to fail. This is very annoying. I do want the build to fail when a test of some other unit fails, because (for example) if a certain depth-first preorder calculation breaks, everything will go haywire. However, if I just want to show someone some generated code for a type I know works, it is very annoying that I cannot quickly build my application just because the type I am working on right now is not finished.
So, given this background, how can I get a nice automated system which checks these generation specifications? Possibilities I have considered:
A JUnit-integrated solution appears to be less than ideal, unless I can improve the integration between Maven and JUnit, and between JUnit, my runner, and everything else.
We used FitNesse earlier, but overall ditched it because it caused more problems than it solved. The major issues we had were integration into Maven and Hudson.
A solution using TextTest. I am not entirely convinced, because it mostly wants an executable, strings to put on stdin, and strings to expect on stdout. Adding the whole "run application, link with host code and THEN run the generated executable" step seems kinda complicated.
Writing my own solution. This will of course work and do what I want. However, it will be the most time-consuming option, as usual.
So... do you see another possible way to do this that avoids writing something of my own?
You can run Maven with -Dmaven.test.skip=true. NetBeans has a way to set this automatically unless you explicitly invoke one of the commands that test the project; I don't know about Eclipse.
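For reference, the command-line form is the first line below; Maven's Surefire plugin also understands -DskipTests, which still compiles the test sources but does not run them:

mvn install -Dmaven.test.skip=true
mvn install -DskipTests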
I am working on a small team of web application developers. We edit JSPs in Eclipse on our own machines and then move them over to a shared application server to test the changes. I have an Ant script that will take ALL the JSPs on my machine and move them over to the application server, but will only overwrite JSPs if the ones on my machine are "newer". This works well most of the time, but not all of the time. Our update method doesn't preserve file change day/times, so it is possible that an Update on my machine will set the file day/time to now instead of when the file was actually last changed. If someone else worked on that file 1 hour ago (but hasn't committed the changes yet), then the older file on my PC will actually have a newer date. So when I run the Ant script it will overwrite their changes with an older file.
What I am looking for is an easy way to just move the file I am currently working on. Is there a way to specify the "current" file in an Ant script? Or an easy way to move the current file within Eclipse? Perhaps a good plugin to do this kind of stuff? I could go out to Windows Explorer to separately move the file, but I would much prefer to be able to do it from within Eclipse.
Add a target to your ant build file to copy a single jsp using a command line property definition as #matt b described.
Create a new external tool launch profile and use the "String Substitution Preferences" to pass in the reference to the active file in the editor (resource_name).
See Eclipse Help | Java Development User Guide | Reference | Preferences | Run/Debug | Launching | String Substitution
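Putting those two steps together, the Ant side might look roughly like this (the target and property names are made up; depending on how your build resolves paths, you may want ${resource_loc}, the full path, instead of ${resource_name}):

<!-- copies the single JSP named by the "file" property -->
<target name="copy-current">
    <copy file="${file}" todir="${deploy.dir}" overwrite="true"/>
</target>

The external tool launch would then pass arguments like -Dfile=${resource_loc} copy-current.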
How would Ant know what file was "current"? It has no way of knowing.
You could pass the name of the file into your Ant script (taking advantage of the fact that any argument you pass to Ant with -D automatically becomes a property in your script)...
ant -Dfile=myfile.jsp update
and have your script look something like...
<copy file="${file}" todir="blah"/>
...but it would probably be a pain to constantly type in the name of the file on the commandline.
Honestly, the type of problem you've described is inevitable when you have multiple developers sharing an environment. I think a better approach for you and your team long-term is to have each developer work/test on a local application server instance before the code is promoted. This removes all the headaches, bottlenecks, and scheduling trouble of sharing an app server with other people.
You should really use source control any time you have multiple people working on the same thing (well, you should use it any time regardless, but that's a different conversation). This way, when conflicts like this occur, the tool will know and make someone perform the merge so that no one's changes get lost. Then, the test server can run on a clean checkout from source, and each developer can also test the full application locally, because everyone's changes will be instantly available to them through the source repository.
I suggest you use source control; I prefer Subversion. You can use CruiseControl to run the build automatically whenever someone commits new code.
The antrunner4e plugin is exactly what you are looking for -- see http://sourceforge.net/projects/antrunner4e/