Unit testing a Swing component - java

I am writing a TotalCommander-like application. I have a separate component for the file list, and a model for it. The model supports listeners and issues notifications for events like CurrentDirChanged etc. in the following manner:
private void fireCurrentDirectoryChanged(final IFile dir) {
    if (SwingUtilities.isEventDispatchThread()) {
        for (FileTableEventsListener listener : tableListeners)
            listener.currentDirectoryChanged(dir);
    } else {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                for (FileTableEventsListener listener : tableListeners)
                    listener.currentDirectoryChanged(dir);
            }
        });
    }
}
I've written a simple test for this:
@Test
public void testEvents() throws IOException {
    IFile testDir = mockDirectoryStructure();
    final FileSystemEventsListener listener =
            context.mock(FileSystemEventsListener.class);
    context.checking(new Expectations() {{
        oneOf(listener).currentDirectoryChanged(with(any(IFile.class)));
    }});
    FileTableModel model = new FileTableModel(testDir);
    model.switchToInnerDirectory(1);
}
This does not work, because there is no EventDispatchThread. Is there any way to unit test this in a headless build?
unit-testing java swing jmock

Note that, generally speaking, unit testing UI code is difficult because you have to mock out a lot of infrastructure that simply is not available in a test environment.
Therefore the main aim when developing applications (of any type) is to separate the UI from the main application logic as much as possible. Strong dependencies between the two make unit testing really hard, a nightmare basically. This is usually addressed with a pattern like MVC, where you mainly test your controller classes, and your view classes do nothing but construct the UI and delegate their actions and events to the controllers. This separates responsibilities and makes testing easier.
Moreover, you shouldn't necessarily test things which are already provided by the framework, such as whether events are correctly fired. You should just test the logic you write yourself.

Have a look at this:
FEST is a collection of libraries, released under the Apache 2.0 license, whose mission is to simplify software testing. It is composed of various modules, which can be used with TestNG or JUnit...

Check the uispec4j project. That's what I use to test my UIs.
www.uispec4j.org

I think the problem with testing is revealing a problem with the code. It shouldn't really be the model's job to decide whether it's running in the dispatch thread; that's too many responsibilities. The model should just do its notification job and let a calling component decide whether to call it directly or via invokeLater. That wrapper belongs in the part of the code that knows about Swing threads; the model itself should only know about files and such.
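To make the suggested split concrete, here is a minimal sketch (all names are illustrative, not from the question): the model fires its listeners synchronously on whatever thread calls it, and a small Swing-aware adapter is the one place that decides to hop onto the EDT.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;
import javax.swing.SwingUtilities;

// The model knows nothing about Swing: it notifies listeners on the caller's thread.
class DirectoryModel {
    private final List<Consumer<String>> listeners = new CopyOnWriteArrayList<>();

    void addListener(Consumer<String> listener) {
        listeners.add(listener);
    }

    void fireCurrentDirectoryChanged(String dir) {
        for (Consumer<String> listener : listeners) {
            listener.accept(dir);
        }
    }
}

// Only the Swing wiring code knows about the EDT; it wraps a listener at registration time.
class EdtListeners {
    static <T> Consumer<T> onEdt(Consumer<T> delegate) {
        return value -> {
            if (SwingUtilities.isEventDispatchThread()) {
                delegate.accept(value);
            } else {
                SwingUtilities.invokeLater(() -> delegate.accept(value));
            }
        };
    }
}
```

A headless unit test then registers a plain listener and asserts on it directly; only the real UI registers its listener via EdtListeners.onEdt(...).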

I've only been working with jMock for two days... so please excuse me if there is a more elegant solution. :)
It seems like your FileTableModel depends on SwingUtilities... have you considered mocking the SwingUtilities that you use? One way that smells like a hack but would solve the problem would be to create an interface, say ISwingUtilities, and implement a dummy class MySwingUtilities that simply forwards to the real SwingUtilities. And then in your test case you can mock up the interface and return true for isEventDispatchThread.
@Test
public void testEventsNow() throws IOException {
    IFile testDir = mockDirectoryStructure();
    final ISwingUtilities swingUtils = context.mock(ISwingUtilities.class);
    final FileSystemEventsListener listener =
            context.mock(FileSystemEventsListener.class);
    context.checking(new Expectations() {{
        oneOf(swingUtils).isEventDispatchThread();
        will(returnValue(true));
        oneOf(listener).currentDirectoryChanged(with(any(IFile.class)));
    }});
    FileTableModel model = new FileTableModel(testDir);
    model.setSwingUtilities(swingUtils); // or use constructor injection if you prefer
    model.switchToInnerDirectory(1);
}
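The seam itself might look like this (a sketch; the interface, its methods, and the fake are all made up for illustration):

```java
import javax.swing.SwingUtilities;

// Production code depends on this interface instead of calling SwingUtilities statically.
interface ISwingUtilities {
    boolean isEventDispatchThread();
    void invokeLater(Runnable task);
}

// Thin forwarding implementation used in the real application.
class MySwingUtilities implements ISwingUtilities {
    public boolean isEventDispatchThread() {
        return SwingUtilities.isEventDispatchThread();
    }
    public void invokeLater(Runnable task) {
        SwingUtilities.invokeLater(task);
    }
}

// Hand-written fake for headless tests: pretend we are already on the EDT.
class FakeSwingUtilities implements ISwingUtilities {
    public boolean isEventDispatchThread() {
        return true;
    }
    public void invokeLater(Runnable task) {
        task.run(); // run synchronously so the test can assert immediately
    }
}
```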

Related

What is good coding practice when it comes to structuring a JavaFX controller?

I am a student learning how to use JavaFX and I've got my first GUI working by using SceneBuilder and a Controller class. However, from my point of view the structure of the code in the controller looks incredibly messy and ugly because I put every event handler in the initialize() method of Controller. This makes it look like this:
@FXML
private void initialize() {
    dealtHandLabel.setText("Your cards will be shown here.");
    TextInputDialog userInput = new TextInputDialog();
    userInput.setTitle("How many cards?");
    userInput.setHeaderText("Enter how many cards you want to get.");
    userInput.setContentText("No. of cards:");
    // This makes it so that the button displays a hand of cards (of specified amount) when clicked
    dealHand.setOnAction(event -> {
        Optional<String> result = userInput.showAndWait();
        if (result.isPresent()) {
            int requestedAmount = Integer.parseInt(result.get());
            StringBuilder sb = new StringBuilder();
            cardHand = deck.dealHand(requestedAmount);
            cardHand.forEach((card) -> sb.append(card.getAsString()).append(" "));
            dealtHandLabel.setText(sb.toString());
        }
    });
    // This button uses lambdas and streams to display requested information (sum, heart cards, etc.)
    checkHand.setOnAction(event -> {
        int cardSum = cardHand.stream().mapToInt(card -> card.getFace()).sum();
        List<PlayingCard> spadeCards = cardHand.stream().filter((card) -> card.getSuit() == 'S').toList();
        List<PlayingCard> heartCards = cardHand.stream().filter((card) -> card.getSuit() == 'H').toList();
        List<PlayingCard> diamondCards = cardHand.stream().filter((card) -> card.getSuit() == 'D').toList();
        List<PlayingCard> clubCards = cardHand.stream().filter((card) -> card.getSuit() == 'C').toList();
        StringBuilder sb = new StringBuilder();
        heartCards.forEach((card) -> sb.append(card.getAsString()).append(" "));
        sumOfFacesField.setText(String.valueOf(cardSum));
        heartCardsField.setText(sb.toString());
        if (heartCards.size() >= 5 || diamondCards.size() >= 5 || spadeCards.size() >= 5 || clubCards.size() >= 5) {
            flushField.setText("Yes");
        } else {
            flushField.setText("No");
        }
        if (cardHand.stream().anyMatch((card) -> card.getAsString().equals("S12"))) {
            spadesQueenField.setText("Yes");
        } else {
            spadesQueenField.setText("No");
        }
    });
}
My lecturer does the exact same thing where he straight up puts every node handler into the initialize method, but I am not sure if this is good coding practice because it makes code harder to read from my point of view. Would it be better to put the different handlers into separate methods and connect them to the correct nodes using SceneBuilder, or is putting everything into initialize considered common coding practice among JavaFX developers?
This is an opinionated, perhaps even arbitrary, decision; both approaches are OK.
There is nothing wrong with coding the event handlers in the initialize function versus referencing an event handler method from FXML.
These kinds of questions are usually out of scope for StackOverflow, but I'll add some pointers and opinions anyway, as they may help you or others regardless of StackOverflow policy.
Reference the actions in Scene Builder
Personally, I'd reference the action in Scene Builder:
Fill in a value for onAction or other events in the code panel section of Scene Builder UI for the highlighted node.
This will also add the reference in FXML, so you will have something like this, with the hashtag value for the onAction attribute:
<Button fx:id="saveButton" mnemonicParsing="false" onAction="#save" text="Save" />
Have Scene Builder generate a skeleton (View | Show Sample Skeleton). This will create a method signature to fill in, like this:
@FXML
void save(ActionEvent event) {
}
Then place the event handling code in there.
For that setup, IDEs such as Idea will do additional intelligent checks for consistency and allow element navigation between FXML and Java code, which can be nice, though that isn't really critical.
What follows is optional additional information on some important design decisions regarding JavaFX controllers.
Ignore this if it confuses you or is not relevant for your application (which is likely for a small study application).
Consider using MVC and a shared model
The more important design decision with regards to controllers is, usually, whether or not to use a shared model and MVC, MVP, or MVVM.
I'd encourage you to study that, research the acronyms, and look at the Eden coding MVC guide.
Consider using dependency injection
Also consider whether or not to use a dependency injection framework with the controllers, e.g. Spring integration (for a more complex app) or the clever eden injection pattern. You don't need to use these patterns, but they can help. Spring in particular is complex, and the integration with JavaFX is currently a bit tricky. I know both, so I would use them if the app called for it, but for others, it may not be a good combination.
Consider a business services layer
For medium to larger sized apps, in addition to having a separate model layer, try to move the business logic out of the controller, so that the controller just invokes functions on business services that manipulate the shared model and binds that model to the UI, rather than implementing the business logic directly in the controller.
This makes reusing, reasoning about, and testing the business logic easier. For smaller apps, the additional abstraction is not necessary and you can do the work in the controller.
Often such a handler will call an injected service that interacts with the shared model. If the updated data also needs to be persisted, then the injected service can also invoke a database access object or rest API client to persist the data.
Putting it all together
So to go back to the prior example, you might implement your save function in the controller like this:
public class UserController {
    @FXML
    TextField userName;

    private UserService userService;

    @Autowired
    public void setUserService(UserService userService) {
        this.userService = userService;
    }

    @FXML
    void save(ActionEvent event) {
        userService.saveUser(new User(userName.getText()));
    }
}
Where the userService might reference a Spring WebFlux REST Client to persist the new user to a cloud-deployed REST service or maybe a Spring Data DAO to store the new user in a shared RDBMS database.
As noted, not all apps need this level of abstraction; in particular, the injection frameworks are not required for small apps. And you can mix architectural styles within a given app, using shared models and services as appropriate and writing some smaller functions directly in the controller if you prefer to code that way. Just be careful, if you do mix design patterns, that it doesn't end up a jumbled mess :-)

Unit Testing a Public method with private supporting methods inside of it?

When trying to perform test driven development on my JSF app, I have a hard time understanding how to make my classes more testable and decoupled.. For instance:
@Test
public void testViewDocumentReturnsServletPath() {
    DocumentDO doc = new DocumentDO();
    doc.setID(7L);
    doc.setType("PDF");
    DocumentHandler dh = new DocumentHandler(doc);
    String servletPath = dh.viewDocument();
    assertTrue(servletPath.contains("../../pdf?path="));
}
This is only testable (with my current knowledge) if I remove some of the supporting private methods inside viewDocument() that are meant to interact with external resources like the DB.
How can I unit test the public API with these supporting private methods inside as well?
Unit testing typically includes mocking of external dependencies that a function relies on in order to get a controlled output. This means that if your private method makes a call to an API you can use a framework like Mockito to force a specific return value which you can then use to assure your code handles the value the way you expect. In Mockito for example, this would look like:
when(someApiCall).thenReturn(someResource);
This same structure holds if you wish to interact with a database or any other external resource that the method you are testing does not control.
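If you prefer not to pull in a mocking framework, the same controlled-output idea works with a hand-written stub; the types below are invented for illustration, not the asker's real classes:

```java
// The handler depends on an interface rather than opening a DB connection itself,
// so a test can substitute any canned implementation.
interface DocumentStore {
    String fetchPath(long id); // in production this would query the database
}

class DocumentHandler {
    private final DocumentStore store;

    DocumentHandler(DocumentStore store) {
        this.store = store;
    }

    String viewDocument(long id) {
        return "../../pdf?path=" + store.fetchPath(id);
    }
}
```

Since DocumentStore has a single method, a test can stub it with a lambda: `new DocumentHandler(id -> "docs/" + id + ".pdf")`.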

Best practice to associate message and target class instance creation

The program I am working on has a distributed architecture, more precisely the Broker-Agent pattern. The broker sends messages to its corresponding agent in order to tell the agent to execute a task. Each message contains the target task information (the task name, configuration properties needed for the task to perform, etc.). In my code, each task on the agent side is implemented in a separate class, like:
public class Task1 {}
public class Task2 {}
public class Task3 {}
...
Messages are in JSON format like:
{
    "taskName": "Task1",  // put the class name here
    "config": {
    }
}
So what I need is to associate the message sent from the broker with the right task in the agent side.
I know one way is to put the target task class name in the message so that the agent is able to create an instance of that task class from the task name extracted from the message using reflection, like:
Class.forName(className).getConstructor(String.class).newInstance(arg);
I want to know the best practice for implementing this association. The number of tasks is growing, and I think writing raw strings is error-prone and hard to maintain.
If you're that specific about classnames you could even think about serializing task objects and sending them directly. That's probably simpler than your reflection approach (though even tighter coupled).
But usually you don't want that kind of coupling between Broker and Agent. A broker needs to know which task types there are and how to describe the task in a way that everybody understands (like in JSON). It doesn't / shouldn't know how the Agent implements the task. Or even in which language the Agent is written. (That doesn't mean that it's a bad idea to define task names in a place that is common to both code bases)
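One lightweight way to keep the task names in a place common to both code bases is a shared enum; this sketch assumes broker and agent can share a small module (names illustrative):

```java
// Shared between broker and agent so neither side hardcodes raw strings.
enum TaskType {
    TASK1("Task1"),
    TASK2("Task2");

    private final String wireName;

    TaskType(String wireName) {
        this.wireName = wireName;
    }

    String wireName() {
        return wireName; // the value that goes into the JSON "taskName" field
    }

    static TaskType fromWireName(String name) {
        for (TaskType t : values()) {
            if (t.wireName.equals(name)) {
                return t;
            }
        }
        throw new IllegalArgumentException("Unknown task: " + name);
    }
}
```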
So you're left with finding a good way to construct objects (or call methods) inside your agent based on some string. And the common solution for that is some form of factory pattern like: http://alvinalexander.com/java/java-factory-pattern-example - also helpful: a Map<String, Factory> like
interface Task {
    void doSomething();
}

interface Factory {
    Task makeTask(String taskDescription);
}

Map<String, Factory> taskMap = new HashMap<>();

void init() {
    taskMap.put("sayHello", new Factory() {
        @Override
        public Task makeTask(String taskDescription) {
            return new Task() {
                @Override
                public void doSomething() {
                    System.out.println("Hello " + taskDescription);
                }
            };
        }
    });
}

void onTask(String taskName, String taskDescription) {
    Factory factory = taskMap.get(taskName);
    if (factory == null) {
        System.out.println("Unknown task: " + taskName);
        return; // without this return, the next line would throw a NullPointerException
    }
    Task task = factory.makeTask(taskDescription);
    // execute task somewhere
    new Thread(task::doSomething).start();
}
http://ideone.com/We5FZk
And if you want it fancy, consider annotation-based reflection magic. It depends on how many task classes there are: the more there are, the more effort is worth putting into an automagic solution that hides the complexity from you.
For example, the above Map could be filled automatically by adding some classpath scanning for classes of the right type carrying an annotation that holds the string(s). Or you could let a DI framework inject all the things that need to go into the map. DI in larger projects usually solves these kinds of issues really well: https://softwareengineering.stackexchange.com/questions/188030/how-to-use-dependency-injection-in-conjunction-with-the-factory-pattern
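Without a classpath-scanning library, the annotation idea can still be sketched with an explicit class list; everything below (the annotation, interface, and registry) is illustrative:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.HashMap;
import java.util.Map;

@Retention(RetentionPolicy.RUNTIME)
@interface TaskName {
    String value();
}

interface AgentTask {
    String run(String config);
}

@TaskName("sayHello")
class HelloTask implements AgentTask {
    public String run(String config) {
        return "Hello " + config;
    }
}

class TaskRegistry {
    private final Map<String, Class<? extends AgentTask>> tasks = new HashMap<>();

    // A scanning library (or DI container) would discover these classes automatically.
    void register(Class<? extends AgentTask> taskClass) {
        tasks.put(taskClass.getAnnotation(TaskName.class).value(), taskClass);
    }

    AgentTask create(String name) {
        Class<? extends AgentTask> taskClass = tasks.get(name);
        if (taskClass == null) {
            throw new IllegalArgumentException("Unknown task: " + name);
        }
        try {
            return taskClass.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot instantiate " + name, e);
        }
    }
}
```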
And besides writing your own distribution system, you can probably use existing ones (reuse rather than reinvent is a best practice). Maybe http://www.typesafe.com/activator/template/akka-distributed-workers or, more generally, http://twitter.github.io/finagle/ work in your context. But there are way too many other open source distributed things that cover different aspects to name all the interesting ones.

How to separate Swing GUI from Business Logic when Spring etc. is not used

please be advised, this is a long post. Sorry for that but I want to make my point clear:
I was wondering how to separate Swing GUI from Presentation and Business Logic for quite a long time.
At work I had to implement a 3 MD Excel Export for some data with a small Swing Dialog to configure the export.
We do not use a framework like Spring for this so I had to implement it myself.
I wanted to completely separate the GUI from the Business Logic, which in this case means the following tasks:
Tell BL to start its job from GUI
Report Progress from BL to GUI
Report Logging from BL to GUI
Delegate BL Result to GUI
Of course the GUI shouldn't know about the BL implementation and vice versa.
I created several interfaces for all those tasks above, e.g. a ProgressListener, LogMessageListener, JobDoneListener, etc., to be fired by the Business Logic. For instance, if the Business Logic wants to report logging, it calls
fireLogListeners("Job has been started");
Classes that implement the public LogListener interface and are attached to the BL will then be notified about the "Job has been started" log message.
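The firing side of such a listener setup might look like this (a minimal sketch with illustrative names):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface LogListener {
    void logMessage(String message);
}

class ExportJob {
    // CopyOnWriteArrayList tolerates listeners being added or removed mid-notification.
    private final List<LogListener> logListeners = new CopyOnWriteArrayList<>();

    void addLogListener(LogListener listener) {
        logListeners.add(listener);
    }

    protected void fireLogListeners(String message) {
        for (LogListener listener : logListeners) {
            listener.logMessage(message);
        }
    }

    void start() {
        fireLogListeners("Job has been started");
        // ... do the actual export work, firing further log/progress events along the way
    }
}
```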
All these listeners are at this time implemented by the GUI itself, which in general looks like this:
public class ExportDialog extends JDialog implements ProgressListener, LogListener, JobFinishedListener, ErrorListener {
    @Override
    public void jobFinished(Object result) {
        // Create Save File dialog and save exported Data to file.
    }
    @Override
    public void reportProgress(int steps) {
        progressBar.setValue(progressBar.getValue() + steps);
    }
    @Override
    public void errorOccured(Exception ex, String additionalMessage) {
        ExceptionDialog dialog = new ExceptionDialog(additionalMessage, ex);
        dialog.open();
    }
    // etc.
}
The class that creates both the GUI and the BL simply attaches the GUI (as the implementor of all these listener interfaces) to the BL, which looks something like this:
exportJob.addProgressListener(uiDialog);
exportJob.addLogListener(uiDialog);
exportJob.addJobFinishedListener(uiDialog);
exportJob.start();
I am now quite unsure about this design because of all those newly created listener interfaces.
What do you think about it?
How do you separate your Swing GUI components from BL?
Edit:
For better demonstrating purpose I created a Demo workspace in eclipse file-upload.net/download-9065013/exampleWorkspace.zip.html
I pasted it to pastebin also, but better import those classes in eclipse, pretty a lot of code http://pastebin.com/LR51UmMp
A few things.
I would not have the uiDialog code in the ExportFunction class. The whole perform method should just be code in the main class. The ExportFunction's responsibility is to 'export', not to 'show GUI'.
public static void main(String[] args) {
    ExportFunction exporter = new ExportFunction();
    final ExportUIDialog uiDialog = new ExportUIDialog();
    uiDialog.addActionPerformedListener(exporter);
    uiDialog.pack();
    uiDialog.setVisible(true);
}
(SwingUtilities.invokeLater() is not needed.)
You seem to be overengineering a fair bit. I don't know why you would expect many threads to be running at the same time. When you press the button, you would only expect one thread to run, right? Then there is no need for an array of actionPerformedListener.
Instead of this:
button.addActionListener(new ActionListener() {
    @Override
    public void actionPerformed(ActionEvent arg0) {
        if (startConditionsFulfilled()) {
            fireActionListener(ActionPerformedListener.STARTJOB);
        }
    }
});
why not just:
final ExportJob exportJob = new ExportJob();
exportJob.addJobFinishedListener(this);
exportJob.addLogListener(this);
button.addActionListener(new ActionListener() {
    @Override
    public void actionPerformed(ActionEvent e) {
        exportJob.start();
    }
});
That way you can get rid of ExportFunction which doesn't really serve any purpose.
You seem to have a lot of arrays of listeners. Unless you really really need them I wouldn't bother with them and keep it as simple as possible.
Instead of:
Thread.sleep(1000);
fireLogListener("Excel Sheet 2 created");
Thread.sleep(1000);
just have:
Thread.sleep(1000);
log("Excel Sheet 2 created");
Thread.sleep(1000);
where log is:
private void log(final String message) {
    ((DefaultListModel<String>) list.getModel()).addElement(message);
}
This way you are keeping it simpler and cleaner.
The GUI should not know about the BL, but the BL somehow has to tell the GUI what to do. You can abstract ad infinitum with lots of interfaces, but in 99.99% of applications this is not necessary, especially in yours, which seems fairly simple.
So while the code you have written is pretty good, I would try to simplify and reduce the interfaces. It doesn't warrant that much engineering.
Basically, your architecture seems OK to me. I suppose you wonder about it because of the numerous listeners you had to set up.
A solution for this might be either:
a) to have a generic Event class, with subclasses for specific events.
You could use a visitor to implement the actual listeners.
b) to use an Event Bus (see guava, for instance).
With an event bus architecture, your model will publish events to the event bus,
and your UI objects will listen for events from the event bus, and filter them.
Some systems can even use annotations for declaring listener methods.
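Guava's EventBus gives you option (b) out of the box (register/post plus @Subscribe discovery); a stripped-down hand-rolled version, just to show the shape of the idea:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal bus: the model posts events, UI objects subscribe by event type.
// (Only exact-type matching here; a real bus also walks the type hierarchy.)
class SimpleEventBus {
    private final Map<Class<?>, List<Consumer<Object>>> subscribers = new ConcurrentHashMap<>();

    <T> void subscribe(Class<T> type, Consumer<T> handler) {
        subscribers.computeIfAbsent(type, t -> new CopyOnWriteArrayList<>())
                   .add(event -> handler.accept(type.cast(event)));
    }

    void post(Object event) {
        List<Consumer<Object>> handlers = subscribers.get(event.getClass());
        if (handlers != null) {
            for (Consumer<Object> handler : handlers) {
                handler.accept(event);
            }
        }
    }
}

// Example event the business logic might publish.
class ProgressEvent {
    final int steps;

    ProgressEvent(int steps) {
        this.steps = steps;
    }
}
```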

I can't unit test my class without exposing private fields -- is there something wrong with my design?

I have written some code which I thought was quite well-designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I need to change some of my variables access modifiers from private to default, i.e. expose them (only within a package, but still...).
Here is a rough overview of my code in question. There is supposed to be some sort of address-validation framework that enables address validation by different means, e.g. by some external webservice, by data in a DB, or by any other source. So I have a notion of a Module, which is just this: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is some sort of factory, that based on a request or session state chooses a proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request); // analyze request and choose a module
        module.init(createInitParams(request)); // init module
        return module;
    }
}
And then, I have written a Module that uses some external webservice for validation, and implemented it like that:
class WebServiceModule implements Module {
    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade which is a wrapper over external web service, and my module calls this facade, processes its response and returns some framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then again, in order for the module to use my mocked web service, the field webservice must be accessible from the outside. This breaks my design and I wonder if there is anything I could do about it. Obviously, the facade cannot be passed in the init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how? I have not used any DI frameworks before, like Guice, so I don't know if it could be easily used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me, that I could create a WebServiceProcessor which takes a WebServiceFacade as a constructor argument and then test just the WebServiceProcessor. This would be one of the solutions to my problem. What do you think about it? I have one problem with that, because then my WebServiceModule would be sort of useless, just delegating all its work to another components, I would say: one layer of abstraction too far.
Yes, your design is wrong. You should do dependency injection instead of new ... inside your class (which is also called "hardcoded dependency"). Inability to easily write a test is a perfect indicator of a wrong design (read about "Listen to your tests" paradigm in Growing Object-Oriented Software Guided by Tests).
BTW, using reflection or dependency breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said and would like to suggest that the reason why you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the Single responsibility principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they only have this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code becomes hard to unit-test, since it is explicitly referencing prod code, so keep this impl very concise. Put any logic (like createParamsForFacade) on the side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the façade as a dependency, following the Inversion of Control (IoC) principle:
class WebServiceValidator implements Validator {
    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. This way you will naturally gravitate towards a testable solution, which is usually also the best design. I think that this is due to the fact that testing requires modularity, and modularity is coincidentally the hallmark of good design.
