I have a Scala class that uses the Java NIO WatchService to detect the creation of new folders in a specific directory.
The WatchService works well when the app is running and I manually copy a folder into the target folder.
I have created a unit test with ScalaTest that initializes my class and copies a test folder into the target folder using Apache Commons:
FileUtils.copyDirectory(testFolder, new File(targetFolder, testFolder.getName), false)
The watch service does not detect any new entry in the target folder within 30 seconds. My check runs inside an eventually block similar to:
eventually(timeout(Span(30, Seconds)), interval(Span(1, Seconds))) {
  // check whether the service detected the new entry
}
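For reference, the watcher is registered roughly like this (a sketch; it assumes targetFolder is a java.io.File, and the real class polls the service from a background thread):

import java.nio.file.{FileSystems, StandardWatchEventKinds}

// Sketch of the registration step only; names are illustrative.
val targetPath = targetFolder.toPath
val watchService = FileSystems.getDefault.newWatchService()
targetPath.register(watchService, StandardWatchEventKinds.ENTRY_CREATE)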
Any idea why this does not work in unit tests?
Just discovered the problem was in the way I used ScalaTest. I was trying to use a fixture to open/close my service around the feature boundaries:
describe("The WatchService") {
withWatchService { watchService =>
it("should test feature 1") { /* test code here */ }
it("should test feature 2") { /* test code here */ }
}
}
The code above does not work: the it blocks only register tests, which execute later, so the watch service is closed before the features actually run. To make it work I created a single feature with the fixture nested inside it:
describe("The WatchService") {
  it("should test features") {
    withWatchService { watchService =>
      /* test code here */
    }
  }
}
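For completeness, withWatchService is a loan-pattern fixture along these lines (a sketch, not necessarily the exact implementation):

import java.nio.file.{FileSystems, WatchService}

// Loan pattern: open the service, lend it to the test body, and
// guarantee it is closed afterwards.
def withWatchService(testCode: WatchService => Any): Unit = {
  val watchService = FileSystems.getDefault.newWatchService()
  try testCode(watchService)
  finally watchService.close()
}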
I'm getting a persistent error using NetBeans. I've got a LinkedList class that I am testing via a JUnit test, created via LinkedList.java: Tools -> Create/Update Tests; the resulting LinkedListTest.java class is now located under Test Packages.
My LinkedList.java file works correctly when exercised from a class with a main method.
public class LinkedListTest {

    @Test
    public void testAddFirst() {
        LinkedList linkedList = new LinkedList();
        Country c1 = new Country("Australia");
        linkedList.addFirst(c1);
        assertEquals("Australias", linkedList.getValue(0)); // should fail the test
    }
    // default generated test methods beneath
}
All my imports check out. I'm on JUnit 5.3.1, and I had to download apiguardian-1.1.0.jar from the MVN repository to clear this error:
reason: class file for org.apiguardian.api.API$Status not found
When I right-click in this file and select Test File (or use Ctrl+F6), select Test File from the original LinkedList file, or even use Alt+F6 to test the whole project, I'm met with 'No tests executed.', an empty Test Results window, and no notifications. What am I doing wrong?
Thanks for any help
Edit: I just switched from NetBeans to Eclipse.
You forgot to extend Runner in your class. Use it like below:
public class LinkedListTest extends Runner {
}
Hope this helps you.
Before migrating to the module system, I had one main JavaFX application that loaded multiple custom libraries providing different options and capabilities for the main app.
In the old implementation, I just shipped a new library as an update; the main application read all libraries from a folder, and it worked like a charm. But in the module system, if my application wants to use a new modular library that I ship, its module-info file needs updating, i.e., I need to ship updates for both the modular library and the main application.
Just imagine: it would be as if Chrome needed to ship a browser update for every new plugin that is created. As far as I can see, the Java module system won't let you create pluggable applications this way.
Is there a way to import a new module without updating the main application, or some other way around this?
Java has a class for that: ServiceLoader.
If we assume you have a “service provider” interface named PluginProvider, other modules can declare themselves to provide that service by putting this in their respective module-info.java descriptors:
provides com.john.myapp.PluginProvider with com.library.MyProvider;
Your application would then state that it uses that service in its own module-info:
uses com.john.myapp.PluginProvider;
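Here PluginProvider is an ordinary interface that you define and export from the application's API module. A minimal sketch (the methods are hypothetical, not from the original answer):

package com.john.myapp;

// Hypothetical service interface; plugin modules implement this.
public interface PluginProvider {
    String name();

    void start();
}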
And your application’s code would create a ModuleFinder that looks in the directory (or directories) where you expect those plugin modules to reside, then pass that ModuleFinder to a Configuration which can be used to create a ModuleLayer for the ServiceLoader:
import java.lang.module.ModuleFinder;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.ServiceLoader;
import java.util.stream.Stream;

public class PluginLoader {

    private final ServiceLoader<PluginProvider> loader;

    public PluginLoader() {
        Path pluginDir = Paths.get(System.getProperty("user.home"),
                ".local", "share", "MyApplication", "plugins");

        // Resolve the modules found in the plugin directory against the
        // layer containing the application, loading them all with one
        // new class loader.
        ModuleLayer layer = PluginProvider.class.getModule().getLayer();
        layer = layer.defineModulesWithOneLoader(
                layer.configuration().resolveAndBind(
                        ModuleFinder.of(),
                        ModuleFinder.of(pluginDir),
                        Collections.emptySet()),
                PluginProvider.class.getClassLoader());

        loader = ServiceLoader.load(layer, PluginProvider.class);
    }

    public Stream<PluginProvider> getAll() {
        // ServiceLoader.stream() yields Provider instances; unwrap them.
        return loader.stream().map(ServiceLoader.Provider::get);
    }

    public void reload() {
        loader.reload();
    }
}
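Consuming the plugins is then a plain stream over the providers. A usage sketch, calling the hypothetical name() from the interface above:

PluginLoader plugins = new PluginLoader();
plugins.getAll().forEach(p -> System.out.println("Loaded plugin: " + p.name()));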
You might even want to watch the plugin directory for new or removed files:
try (WatchService watch = pluginDir.getFileSystem().newWatchService()) {
    pluginDir.register(watch,
        StandardWatchEventKinds.ENTRY_CREATE,
        StandardWatchEventKinds.ENTRY_DELETE,
        StandardWatchEventKinds.ENTRY_MODIFY,
        StandardWatchEventKinds.OVERFLOW);

    WatchKey key;
    while ((key = watch.take()).isValid()) {
        key.pollEvents(); // drain pending events so the key resets cleanly
        loader.reload();
        key.reset();
    }
}
I am attempting to gather up all the files my tests generate (log file, screenshots, Cucumber report, etc.) and send them via email. I'm doing this from the runner class using JUnit's @AfterClass annotation.
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/Features",
    glue = { "stepDefinition" },
    tags = { "@Teeest" },
    monochrome = true,
    strict = false,
    plugin = { /* "pretty", */ "html:target/cucumber-html-report",
               "json:target/cucumber-html-report/cucumber-json-report.json" })
public class TestRunner {

    @AfterClass
    public static void sendReport() {
        SomeClass.sendMail();
    }
}
Everything works fine except for the Cucumber reports (both HTML and JSON), which are blank in the email. When I manually check them afterward, they look good, so I'm assuming they are generated sometime after this method executes.
Does anyone have an idea of how I can get around this issue?
I'm thinking of either getting the Cucumber Reports plugin for Jenkins, writing a shell script executed via the Maven POM, or building a separate Java app that looks for the files, zips them, and sends the email.
All three of these ideas come with drawbacks, so I'd really love another hook-type approach if possible: the file locations have dynamic names, and it would be a lot easier to capture the actual locations within the test suite than to look for them afterward.
Thanks!
PS: I am junior at this stuff so please don't hold back on details :)
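One hook-style option worth sketching (a suggestion, not something confirmed in this thread): register a JVM shutdown hook instead of relying on @AfterClass. The hook runs as the JVM exits, after Cucumber's plugins have closed and flushed the report files:

// Sketch: could live in a static initializer of TestRunner.
// SomeClass.sendMail() is the mailing helper from the question.
Runtime.getRuntime().addShutdownHook(new Thread(() -> SomeClass.sendMail()));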
I created a JAR file that launches an application allowing the user to select which test they want to run. Each test has its own class and main method. The UI application passes parameters to the main method of the class that represents the test.
I was able to create the executable JAR file for the UI application, but nothing happens when I select the test I want to run. I suspect a JAR file can only handle one main method.
Below is part of the code for the application that allows the user to select which test to run.
public class UI extends JFrame {

    // The enclosing method was omitted from the original snippet; shown
    // here as a hypothetical handler invoked when the user starts a run.
    private void runSelectedTests() {
        String[] args = { Environment, Browser, TestingCoverage, DBLog, "UI" };

        if (chckbxEQ_CA_Home.isSelected()) {
            EQ_CA_Home.main(args);
        }
        if (chckbxEQ_CA_Condo.isSelected()) {
            EQ_CA_Condo.main(args);
        }
        if (chckbxEQ_CA_Renter.isSelected()) {
            EQ_CA_Renter.main(args);
        }
    }
}
You can do that using the JUnit library:
JUnitCore junit = new JUnitCore();
Result res = junit.run(yourTestClass);
Don't invoke the main() method; let JUnitCore find the appropriate test methods instead.
Also, don't forget to include junit.jar on the classpath, or bundle it as part of your JAR (as a fat JAR).
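Wired into the checkbox handling from the question, that could look like the following sketch (JUnit 4 API; it assumes each test class exposes @Test methods rather than doing all its work in main):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

// Run one selected test class programmatically and report the outcome.
public static void runTest(Class<?> testClass) {
    Result res = new JUnitCore().run(testClass);
    for (Failure failure : res.getFailures()) {
        System.out.println(failure);
    }
    System.out.println("Success: " + res.wasSuccessful());
}

The checkbox handler then becomes, e.g., if (chckbxEQ_CA_Home.isSelected()) runTest(EQ_CA_Home.class);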
I'm reading up on Gradle and am very interested in it, specifically because it appears to allow the introduction of inheritance into the build process. For instance, if I have a Java web app that might be packaged and deployed to Google App Engine instances as well as Amazon EC2 instances, I need a sophisticated build that can take the same Java, XML, properties, CSS, and image files and package/deploy them into two drastically different WAR files.
GAE apps are very specific about how they are packaged; EC2 (pretty much) just requires that you conform to the servlet spec. GAE apps get "deployed" by running an update command from the appcfg.sh script that comes with the SDK; EC2 has its own way to deploy apps. The point is, the two PaaS providers have very different packaging/deployment processes. Conceptually, I'm imagining something like:
public abstract class PackageTask {
    // ...
}

// Package my Eclipse project for deployment to GAE.
public class AppEnginePackageTask extends PackageTask {
    // ...
}

// Package my Eclipse project for deployment to EC2 instances.
public class AmazonPackageTask extends PackageTask {
    // ...
}

public abstract class DeployTask {
    // ...
}

// Deployment to GAE.
public class AppEngineDeployTask extends DeployTask {
    // ...
}

// Deployment to EC2.
public class AmazonDeployTask extends DeployTask {
    // ...
}
Then, I might have a myapp.gradle buildfile that templates the build order of tasks:
clean()
compile()
package()
deploy()
...and somehow, I can configure/inject AppEnginePackageTask/AppEngineDeployTask in place of package()/deploy() for a GAE-based build, or configure/inject AmazonPackageTask/AmazonDeployTask into those templated tasks. Again, I'm not sure how to do this (or even if Gradle can do this), but it's what I'm after.
My understanding was that Gradle can do this. Ant can also be forced to have highly-modular, elegant builds that work this way, but being XML-based, it takes some finessing, whereas an OOP-based language like Groovy makes this cleaner and simpler.
However, all the examples I see of Gradle tasks take the following form:
task package(dependsOn: 'compile') {
    // ...
}

task deploy(dependsOn: 'package') {
    // ...
}
So I ask: these look and feel like non-OOP task definitions. Is my understanding of Gradle (and its OOP nature) fundamentally incorrect? What am I missing here? How can I accomplish these notions of "configurable/injectable build templates" and inheritance-based tasks? Thanks in advance!
Edit: I re-tagged this question with "groovy" because Gradle build scripts are written in a Groovy DSL, and someone who happens to be a Groovy guru (say that five times fast) might also be able to chime in even if they know little about Gradle.
As described here, there are simple tasks and enhanced tasks. The latter are much more flexible and powerful.
The following example isn't exactly what you describe, re:injection, but it illustrates OOP.
Here is the sample build.gradle file. It avoids "package" as that is a keyword in Java/Groovy. The 'build' target depends on 'compile' and some flavour of 'doPack', depending on a property called 'pkgTarget'.
task compile << {
    println "compiling..."
}

task build() << {
}

build.dependsOn {
    compile
}

build.dependsOn {
    if (pkgTarget == "Amazon") {
        task doPack(type: AmazonPackageTask)
    } else if (pkgTarget == "Google") {
        task doPack(type: GooglePackageTask)
    } else {
        task doPack(type: MyPackageTask)
    }
}
where the tasks are defined later in the same file (per the docs, this code can also go into the buildSrc directory):
class MyPackageTask extends DefaultTask {
    def init() { println 'common stuff' }

    @TaskAction
    def doPackage() {
        println 'hello from MyPackageTask'
    }
}

class AmazonPackageTask extends MyPackageTask {
    @TaskAction
    def doPackage() {
        init()
        println 'hello from AmazonPackageTask'
    }
}

class GooglePackageTask extends MyPackageTask {
    @TaskAction
    def doPackage() {
        init()
        println 'hello from GooglePackageTask'
    }
}
and here is the gradle.properties file:
pkgTarget=Amazon
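With that in place, you can switch build flavours without editing the build script by overriding the property on the command line, e.g. gradle build -PpkgTarget=Google.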