I'm writing a new listener to open and close the database connection at the start and end of the suite. I defined several parameters in the testng.xml file to store the database credentials, for example:
<parameter name="DB.user" value="user"/>
Now, I'm trying to get these parameters in the listener:
public class DatabaseListener implements ISuiteListener {
    @Override
    public void onStart(ISuite suite) {
        System.out.println(suite.getXmlSuite().getParameter("DB.user"));
    }
}
This works fine, but I also need these parameters to be updated when I override the value on the command line:
-DDB.user=test
I always get the value defined in testng.xml (user) instead of the value I set on the console (test).
For example, in a test I do the following to get the runtime value:
@BeforeClass
@Parameters({"DB.user"})
public void beforeClass(String DBUser)
How can I do this in the listener?
Thanks and regards.
TestNG is indeed capable of overriding <parameter> values with the values provided via JVM arguments (-D), matching them by name. But that happens at a much later stage, though still before an actual @Test method is invoked.
It has not happened yet when TestNG invokes suite-level listeners, which is what you are using. At that point, TestNG is still just constructing the suite, and the parameter-value resolution mentioned above is yet to happen. That explains why you always see the value that is present in the suite file.
One way of dealing with this would be to do the following:
public class DatabaseListener implements ISuiteListener {
    @Override
    public void onStart(ISuite suite) {
        String fromSuite = suite.getXmlSuite().getParameter("DB.user");
        String parameter = System.getProperty("DB.user", fromSuite);
        System.out.println(parameter);
    }
}
As you can see, you explicitly query the System properties while defining a default value that comes from your suite file. This works much the same way TestNG itself resolves parameters (which it does at a later stage).
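With that change in place, launching the suite with -DDB.user=test makes the listener print test, while running without the flag falls back to the user value defined in testng.xml.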
I have this method that I am using in a NetBeans plugin:
public static SourceCodeFile getCurrentlyOpenedFile() {
    MainProjectManager mainProjectManager = new MainProjectManager();
    Project openedProject = mainProjectManager.getMainProject();
    /* Get the Java file currently displayed in the IDE if there is an opened project */
    if (openedProject != null) {
        TopComponent activeTC = TopComponent.getRegistry().getActivated();
        DataObject dataLookup = activeTC.getLookup().lookup(DataObject.class);
        File file = FileUtil.toFile(dataLookup.getPrimaryFile()); // Currently opened file
        // Check if the opened file is a Java file
        if (FilenameUtils.getExtension(file.getAbsolutePath()).equalsIgnoreCase("java")) {
            return new SourceCodeFile(file);
        } else {
            return null;
        }
    } else {
        return null;
    }
}
Basically, using the NetBeans API, it detects the file currently opened by the user in the IDE, then loads it and creates a SourceCodeFile object out of it.
Now I want to unit test this method using JUnit. The problem is that I don't know how to test it.
Since it doesn't receive any arguments, I can't test how it behaves when given wrong input. I also thought about manipulating openedProject in order to test the method's behaviour for different values of that object, but as far as I know, I can't manipulate a variable that way in JUnit. I also cannot check what the method returns, because in a unit test it will always return null, since no opened file is detected in NetBeans.
So, my question is: how can I approach the unit testing of this method?
Well, your method does take parameters, "between the lines":
MainProjectManager mainProjectManager = new MainProjectManager();
Project openedProject = mainProjectManager.getMainProject();
basically fetches the object to work on.
So the first step would be to change that method signature to:
public static SourceCodeFile getCurrentlyOpenedFile(Project project) {
...
Of course, that object isn't used for anything except the null check. So the next level would be to have a distinct method like
SourceCodeFile lookup(DataObject dataLookup) {
In other words: your real problem is that you wrote hard-to-test code. The "default" answer is: you have to change your production code to make it easier to test, for example by ripping it apart and putting the different aspects into smaller helper methods.
You see, that last method, lookup(), takes a parameter, and now it becomes possible to think up test cases for it. You will probably have to use a mocking framework such as Mockito to pass mocked instances of that DataObject class within your test code.
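For illustration, a minimal sketch of such a test, assuming the refactored lookup() lives on a hypothetical SourceCodeFileResolver class and returns null when nothing is selected (all names here are illustrative, not taken from the question):

import static org.junit.Assert.assertNull;

import org.junit.Test;

public class SourceCodeFileResolverTest {

    @Test
    public void returnsNullWhenNoFileIsSelected() {
        // With no active editor there is no DataObject to look up,
        // so the refactored method is expected to return null.
        assertNull(new SourceCodeFileResolver().lookup(null));
    }
}

Mocking the DataObject itself would look like mock(DataObject.class); just be aware that some NetBeans API methods are final, which the default Mockito mock maker cannot stub, so you may need the mockito-inline mock maker for those.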
Long story short: there are no detours here. You can't test your code in reasonable ways as it is currently structured. Restructure your production code, and then all your ideas about "when I pass X, then Y should happen" can work out.
Disclaimer: yes, theoretically, you could test the above code by heavily relying on frameworks like PowerMock(ito) or JMockit. These frameworks allow you to control (mock) calls to static methods, or to new(). So they would give you full control over everything in your method. But that would basically force your tests to know everything that is going on in the method under test, which is a really bad thing.
I am creating unit tests using JUnit. My application is Maven-based with many profiles, and I am using values from a configuration file (a properties file) which vary from one profile to another. I want the unit-test run to use specified properties only, not the profile ones, when running the test cases.
This can be done in two ways:
1) Either I mock the property file for the unit test (which I don't know how to do), or
2) I change the property-file parameter values at runtime (again difficult to answer). Any help will be appreciated.
One option: use dependency injection in order to acquire, for example, a java.util.Properties object.
Meaning: your production code simply holds a Properties object; like:
class Foo {
    private final Properties properties;

    public Foo(Properties properties) {
        this.properties = properties;
    }
}
At runtime, the class that creates Foo objects reads the property files from disk, turns them into a Properties object, and gives that to the Foo constructor.
In your unit test, your test code creates a Properties object and adds whatever values you require before creating the Foo object.
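A minimal sketch of what such a test could look like, assuming the Foo class above; the getProperty() accessor is hypothetical, so adapt it to Foo's real API:

import static org.junit.Assert.assertEquals;

import java.util.Properties;
import org.junit.Test;

public class FooTest {

    @Test
    public void usesInjectedPropertiesInsteadOfProfileFiles() {
        // Test-specific values; no profile and no file on disk involved.
        Properties properties = new Properties();
        properties.setProperty("db.url", "jdbc:h2:mem:test");

        Foo foo = new Foo(properties);

        // Hypothetical accessor, used only to make the assertion concrete.
        assertEquals("jdbc:h2:mem:test", foo.getProperty("db.url"));
    }
}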
The less elegant detour: make sure that your production code reads its properties from a location that is defined at runtime. That would allow you to create custom property files in some temp directory and then instruct your code under test to work with those files.
I did not get your question exactly: on one side you are saying you don't want to use profile-related values, and on the other you are saying you need to run with specific values (do you mean runtime values or test-specific values?).
Now, to answer your first question:
1) Either I mock the property file for the unit test:
You can instantiate and load a test-specific property file and keep the required values in that file.
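For example, a small sketch of loading a test-only file from the test classpath (the file name test.properties is illustrative):

import java.io.InputStream;
import java.util.Properties;

import org.junit.Test;

public class PropertyLoadingTest {

    @Test
    public void loadsTestSpecificProperties() throws Exception {
        // src/test/resources/test.properties ships only with the tests,
        // so its values shadow whatever the Maven profiles would provide.
        Properties testProps = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/test.properties")) {
            testProps.load(in);
        }
    }
}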
2) Or I change the property-file parameter values at runtime:
You can mock specific property keys with values, like below (this uses BDDMockito's given(...).willReturn(...) and assumes mypropertyUtil is a mock injected into shop):
@Test
public void shouldBuyBread() throws Exception {
    // given
    given(mypropertyUtil.getProperty("NUMBER_OF_BREADS")).willReturn(10);

    // when
    Goods goods = shop.buyBread();

    // then
    assertThat(goods, containBread());
}
I have essentially the same question as here, but am hoping for a less vague, more informative answer.
I'm looking for a way to configure DropWizard programmatically, or at the very least, to be able to tweak configs at runtime. Specifically I have a use case where I'd like to configure metrics in the YAML file to be published with a frequency of, say, 2 minutes. This would be the "normal" default. However, under certain circumstances, I may want to speed that up to, say, every 10 seconds, and then throttle it back to the normal/default.
How can I do this, and not just for the metrics.frequency property, but for any config that might be present inside the YAML config file?
Dropwizard reads the YAML config file and configures all the components only once, on startup. Neither the YAML file nor the Configuration object is ever used again. That means there is no direct way to reconfigure things at runtime.
It also doesn't provide special interfaces/delegates through which you can manipulate the components. However, you can usually access the components' objects (and if not, you can always send a pull request) and configure them manually as you see fit. You may need to read the source code a bit, but it's usually easy to navigate.
In the case of metrics.frequency, you can see that the MetricsFactory class creates ScheduledReporterManager objects per metric type using the frequency setting, and it doesn't look like you can change them at runtime. But you can probably work around it somehow, or even better, modify the code and send a pull request to the dropwizard community.
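If all you need is a faster reporting interval, one possible workaround (a sketch, not a supported dropwizard API for this) is to bypass the YAML-driven reporters and schedule your own against the environment's MetricRegistry inside run(). Shown here with the console reporter; the same start/stop pattern applies to the other ScheduledReporter implementations:

// Inside Application#run(Configuration, Environment), using
// com.codahale.metrics.ConsoleReporter and java.util.concurrent.TimeUnit:
ScheduledReporter reporter = ConsoleReporter.forRegistry(environment.metrics()).build();
reporter.start(10, TimeUnit.SECONDS); // report every 10 seconds
// ... later: reporter.stop(); then start a fresh reporter at the slower rate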
Although this feature isn't supported out of the box by dropwizard, you're able to accomplish it fairly easily with the tools they give you. Note that the solution below definitely works on config values you've provided, but it may not work for built-in configuration values.
Also note that this doesn't persist the updated config values to config.yml. However, that would be easy enough to implement yourself, simply by writing to the config file from the application. If anyone would like to write that implementation, feel free to open a PR on the example project I've linked below.
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And its corresponding configuration class:
ExampleConfiguration.java
public class ExampleConfiguration extends Configuration {
    private String myConfigValue;

    public String getMyConfigValue() {
        return myConfigValue;
    }

    public void setMyConfigValue(String value) {
        myConfigValue = value;
    }
}
Then create a task which updates the config:
UpdateConfigTask.java
public class UpdateConfigTask extends Task {
    private final ExampleConfiguration config;

    public UpdateConfigTask(ExampleConfiguration config) {
        super("updateconfig");
        this.config = config;
    }

    @Override
    public void execute(Map<String, List<String>> parameters, PrintWriter output) {
        config.setMyConfigValue("goodbye");
    }
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
#Path("/config")
public class ConfigResource {
private final ExampleConfiguration config;
public ConfigResource(ExampleConfiguration config) {
this.config = config;
}
#GET
public Response handleGet() {
return Response.ok().entity(config.getMyConfigValue()).build();
}
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application, then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same configuration reference to the constructors of ConfigResource and UpdateConfigTask. If you aren't familiar with the concept, see here:
Is Java "pass-by-reference" or "pass-by-value"?
The classes linked above belong to a project I've created that demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified that this works with the built-in configuration. However, the dropwizard Configuration class, which you need to extend for your own configuration, does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.
I solved this with bytecode manipulation via Javassist.
In my case, I wanted to change the "influx" reporter, and modifyInfluxDbReporterFactory should be run BEFORE dropwizard starts:
private static void modifyInfluxDbReporterFactory() throws Exception {
    ClassPool cp = ClassPool.getDefault();
    // Do NOT use InfluxDbReporterFactory.class.getName() here, as that would
    // force the class into the classloader before it is modified.
    CtClass cc = cp.get("com.izettle.metrics.dw.InfluxDbReporterFactory");
    CtMethod m = cc.getDeclaredMethod("setTags");
    m.insertAfter(
        "if (tags.get(\"cloud\") != null) tags.put(\"cloud_host\", tags.get(\"cloud\") + \"_\" + host);"
        + "tags.put(\"app\", \"sam\");");
    cc.toClass();
}
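For context, a sketch of where that call could live, assuming a standard Dropwizard Application entry point (MyApplication is a placeholder name):

public static void main(String[] args) throws Exception {
    // Rewrite the factory's bytecode before Dropwizard loads the class.
    modifyInfluxDbReporterFactory();
    new MyApplication().run(args);
}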
While creating new scenarios, I only want to run the scenario I am currently working on. For this purpose I want to use the Meta: @skip tag before my scenarios. As I found out, I have to use the embedder to configure the meta filters, so I tried:
configuredEmbedder().useMetaFilters(Arrays.asList("-skip"));
but this still has no effect on my test scenarios. I used it in the constructor of my SerenityStories test suite definition. Here is the complete code of this class:
public class AcceptanceTestSuite extends SerenityStories {

    @Managed
    WebDriver driver;

    public AcceptanceTestSuite() {
        System.setProperty("webdriver.chrome.driver", "D:/files/chromedriver/chromedriver.exe");
        System.setProperty("chrome.switches", "--lang=en");
        System.setProperty("restart.browser.each.scenario", "true");
        configuredEmbedder().useMetaFilters(Arrays.asList("-skip"));
        runSerenity().withDriver("chrome");
    }

    @Override
    public Configuration configuration() {
        Configuration configuration = super.configuration();
        Keywords keywords = new LocalizedKeywords(DEFAULTSTORYLANGUAGE);
        Properties properties = configuration.storyReporterBuilder().viewResources();
        properties.setProperty("encoding", "UTF-8");
        configuration.useKeywords(keywords)
                .useStoryParser(new RegexStoryParser(keywords, new ExamplesTableFactory(new LoadFromClasspath(this.getClass()))))
                .useStoryLoader(new UTF8StoryLoader())
                .useStepCollector(new MarkUnmatchedStepsAsPending(keywords))
                .useDefaultStoryReporter(new ConsoleOutput(keywords))
                .storyReporterBuilder()
                .withKeywords(keywords)
                .withViewResources(properties);
        return configuration;
    }
}
Is this the wrong place, or have I missed something? All scenarios are still executed.
EDIT:
I changed the following classes and now I think that it "works":
public AcceptanceTestSuite() {
    System.setProperty("webdriver.chrome.driver", "D:/files/chromedriver/chromedriver.exe");
    System.setProperty("chrome.switches", "--lang=de");
    System.setProperty("restart.browser.each.scenario", "true");
    this.useEmbedder(configuredEmbedder());
    runSerenity().withDriver("chrome");
}

@Override
public Embedder configuredEmbedder() {
    final Embedder embedder = new Embedder();
    embedder.embedderControls()
            .useThreads(1)
            .doGenerateViewAfterStories(true)
            .doIgnoreFailureInStories(false)
            .doIgnoreFailureInView(false)
            .doVerboseFailures(true);
    final Configuration configuration = configuration();
    embedder.useConfiguration(configuration);
    embedder.useStepsFactory(stepsFactory());
    embedder.useMetaFilters(Arrays.asList("-skip"));
    return embedder;
}
But now I get the message [pool-1-thread-1] INFO net.serenitybdd.core.Serenity - TEST IGNORED, yet the scenario is still executed. Only on the results page do I get the info that this scenario is ignored (but still executed). Is there a way to SKIP the scenario so it won't run at all?
I could not make it work using configuredEmbedder(), but I got there by adding -Dmetafilter="+working -finished" as a goal in my mvn run configurations, using the tag @working for scenarios I'm working on and want to run, and @finished for scenarios I don't want to execute. I still have to change the run configuration whenever I want to change the meta tags, so it is not very comfortable, but I get what I was looking for.
As long as you document it well (some doc in https://github.com/serenity-bdd/the-serenity-book would be brilliant), I think as a JBehave/Serenity user you are well enough placed to decide which option makes the most sense.
Investigation
I debugged the serenity-jbehave classes, trying to understand why setting
configuredEmbedder().useMetaFilters(Collections.singletonList("-skip"))
was not working in any of the possible places I put it within my class extending SerenityStories. I found the strategic place in the code where the metaFilters we define in our class on ExtendedEmbedder#embedder are overwritten with settings from serenity-jbehave.
That method is SerenityReportingRunner#createPerformableTree:
private PerformableTree createPerformableTree(List<CandidateSteps> candidateSteps, List<String> storyPaths) {
    ExtendedEmbedder configuredEmbedder = this.getConfiguredEmbedder();
    configuredEmbedder.useMetaFilters(getMetaFilters());
    BatchFailures failures = new BatchFailures(configuredEmbedder.embedderControls().verboseFailures());
    PerformableTree performableTree = configuredEmbedder.performableTree();
    RunContext context = performableTree.newRunContext(getConfiguration(), candidateSteps,
            configuredEmbedder.embedderMonitor(), configuredEmbedder.metaFilter(), failures);
    performableTree.addStories(context, configuredEmbedder.storyManager().storiesOfPaths(storyPaths));
    return performableTree;
}
This line overwrites the metaFilters we set:
configuredEmbedder.useMetaFilters(getMetaFilters());
It overrides the current metaFilters value.
Going further down the call chain, we get to the logic that defines where metaFilters are taken from, i.e. where we can actually set them.
SerenityReportingRunner#createPerformableTree
↓
SerenityReportingRunner#getMetaFilters
↓
SerenityReportingRunner#getMetafilterSetting
This is the method we need!
private String getMetafilterSetting() {
    Optional<String> environmentMetafilters = getEnvironmentMetafilters();
    Optional<String> annotatedMetafilters = getAnnotatedMetafilters(testClass);
    Optional<String> thucAnnotatedMetafilters = getThucAnnotatedMetafilters(testClass);
    return environmentMetafilters.orElse(annotatedMetafilters.orElse(thucAnnotatedMetafilters.orElse("")));
}
As we can see here, the metaFilters can be defined in three places, and they override each other. In order of decreasing priority, they are:
1. The value of the metafilter (exactly all lowercase!) VM property.
2. The value of the net.serenitybdd.jbehave.annotations.Metafilter annotation on our SerenityStories class.
3. The value of the net.thucydides.jbehave.annotations.Metafilter annotation on our SerenityStories class. This annotation is deprecated, but left in place for backwards compatibility.
Solution that is working with the current serenity-jbehave version
I've tried/debugged all three options; they work and override each other as described above.
1. Use environment metafilter property
Added this to my JVM run arguments:
-Dmetafilter="-skip"
2. Use the modern @Metafilter annotation
import net.serenitybdd.jbehave.SerenityStories;
import net.serenitybdd.jbehave.annotations.Metafilter;
#Metafilter("-skip")
public class Acceptance extends SerenityStories {
3. Use the deprecated @Metafilter annotation
import net.serenitybdd.jbehave.SerenityStories;
import net.thucydides.jbehave.annotations.Metafilter;
#Metafilter("-skip") // warned as deprecated
public class Acceptance extends SerenityStories {
The solution for my current project is to use the current @Metafilter("-skip") annotation on my test class, so as not to depend on (or have to change) the VM properties of a particular Jenkins/local dev execution.
Possible pull request to make
In https://github.com/serenity-bdd/serenity-core/issues/95, the Serenity guys suggested I submit a PR with this fix, since they are not concentrating on Serenity + JBehave now.
I understand where to make the changes (in the code chain described above), but I don't know what the overriding logic should be:
— MetaFilters from configuredEmbedder override any ENV/annotation MetaFilters,
OR
— any ENV/annotation MetaFilters override MetaFilters from configuredEmbedder,
OR
— MetaFilters from configuredEmbedder are merged with ENV/annotation MetaFilters. This option would require a merging priority.
Any suggestions?
Whichever fix it is, I would prefer to add explicit logging of how the overriding works to SerenityReportingRunner#getMetafilterSetting, since the current behaviour is really non-obvious and took a lot of time to investigate.
I have a Parameterized test that is fed, say, with files:
@RunWith(Parameterized.class)
public class FileTest {
    ...
    public static Collection<Object[]> data() {
        return IteratorUtils.toList(FileUtils.iterateFiles(testFilesDir,
                TrueFileFilter.INSTANCE, (IOFileFilter) null));
    }
Whether it's files on a file system, rows from a table, or URLs makes no difference, really. It's just a Parameterized test that's fed a large number of data points and takes a long time to conclude.
Now I am running the test on, say, 10,000 files and I detect a problem with file #9,203. I fix the bug, and to verify the fix I want to re-run the test, but only for this particular file (because I can't wait 2 hours). Subsequent re-runs (after the fix is verified) should of course cover the entire data set.
Is there any way to do that, e.g. by supplying some run-time parameter in a console invocation of JUnit, so that only one particular data point is used?
OK, so in the end I found a way to accomplish this. Use a constructor for your parameterized test class that also takes a friendly name, which you can easily pass from the command line. E.g. something like:
private final File testFile;
private final String friendlyTestName;

public FileTest(File testFile, String friendlyTestName) {
    this.testFile = testFile;
    this.friendlyTestName = friendlyTestName;
}
Of course, you then have to generate the appropriate tuples in the method that provides the data points. E.g. in the example below, the friendly name is simply the filename of the test file (without the path; let's assume the names are unique):
@Parameters(name = "{index}: {1}")
public static Collection<Object[]> data() {
    Collection<File> files = IteratorUtils.toList(FileUtils.iterateFiles(testFilesDir,
            TrueFileFilter.INSTANCE, (IOFileFilter) null));
    Collection<Object[]> rv = new ArrayList<>();
    for (File f : files)
        rv.add(new Object[]{f, f.getName()});
    return rv;
}
Then, when invoking Ant from the command line, pass a target-friendly-name parameter:
ant -Dtarget-friendly-name=a-005 test
... and make sure it is conveyed all the way to the junit Ant task. E.g. in your build.xml you should have something like:
<junit printsummary="${junit.summary}" showoutput="${junit.output}">
<sysproperty key="target-friendly-name" value="${target-friendly-name}"/>
...
</junit>
Finally, in the test method itself, use assumeTrue to require that the friendly name of the data point equals the target friendly name (if one is present; otherwise all tests are run).
@Test
public void testFile() {
    // targetFriendlyName would be read once from the system property,
    // e.g. via System.getProperty("target-friendly-name").
    assumeTrue((targetFriendlyName == null) || targetFriendlyName.equals(friendlyTestName));
    ...
}
I was looking for a way to directly use the {index} property of the @Parameters annotation, which would have removed the need to define a separate friendly name, but I haven't figured out a way to do so; hence this solution requires the somewhat unnatural addition of a friendly-name field to the test class.