How do I write a unit test for the following method so that it is input-file agnostic?
It seems that the reading and translation into business objects are distinct responsibilities - which need to be separate.
That would allow the business translation to be testable.
Any suggestions are welcome.
public Map<Header, Record> createTradeFeedRecords(String tradeFile, String config) throws Exception {
    Map<Header, Record> feedRecordMap = new LinkedHashMap<>();
    try (BufferedReader reader = new BufferedReader(new FileReader(tradeFile))) {
        for (String line; (line = reader.readLine()) != null;) {
            if (line.trim().isEmpty() || line.startsWith("#"))
                continue;
            Record record = recordParser.extractTradeFeedRecord(line, config);
            feedRecordMap.put(record.getHeader(), record);
        }
    } catch (Exception e) {
        e.printStackTrace();
        throw e;
    }
    return feedRecordMap;
}
You could use JUnit's TemporaryFolder rule (or, if using JUnit 5, its equivalent extension) to create an input file for your test(s). You would then provide the path to this file in your tradeFile argument and your test would operate on the file you created. On completion of the test JUnit will discard the temporary folder, thereby adhering to the test principle of self-containment.
This is, I think, the approach which most closely mirrors the actual behaviour of the createTradeFeedRecords method.
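For illustration, here is a minimal sketch of such a JUnit 4 test. The TradeFeedService class name, the "someConfig" value and the exact assertion are placeholders for your actual setup, not something taken from your code:

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.Map;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TradeFeedRecordsTest {

    // JUnit creates this folder before each test and deletes it afterwards
    @Rule
    public TemporaryFolder tempFolder = new TemporaryFolder();

    @Test
    public void createsOneRecordPerNonCommentLine() throws Exception {
        // write a small trade file into the temporary folder
        File tradeFile = tempFolder.newFile("trades.txt");
        Files.write(tradeFile.toPath(),
                Arrays.asList("# a comment line", "", "some trade line"),
                StandardCharsets.UTF_8);

        TradeFeedService service = new TradeFeedService(); // placeholder for your actual class

        Map<Header, Record> records =
                service.createTradeFeedRecords(tradeFile.getAbsolutePath(), "someConfig");

        // only the non-empty, non-comment line should produce a record
        assertEquals(1, records.size());
    }
}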
However, if you really don't want to play around with the file system in your tests or indeed if you just want to achieve this ..
It seems that the reading and translation into business objects are distinct responsibilities - which need to be separate.
... then you could extract the new FileReader(tradeFile) call behind an interface. Something like this, perhaps:
public interface TradeReader {
    Reader read(String input) throws IOException;
}
The 'normal' implementation of this would be:
public class FileTradeReader implements TradeReader {
    @Override
    public Reader read(String input) throws IOException {
        return new FileReader(input);
    }
}
You could then provide an implementation of this for use in your test case:
public class StubTradeReader implements TradeReader {
    @Override
    public Reader read(String input) {
        return new StringReader(input);
    }
}
In your tests you would then inject the class-under-test (i.e. the class which contains createTradeFeedRecords) with an instance of StubTradeReader. In this way, the createTradeFeedRecords method invoked within your tests would act upon whatever input you provided when creating the StubTradeReader and your tests would have no interaction with the file system.
You could also test the TradeReader separately (using the temporary folder approach outlined above, perhaps) thereby achieving the goal of separating reading and translating and testing both independently.
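To make that concrete, here is a rough sketch of such a test, assuming the same test class skeleton as in the earlier example and assuming the class-under-test (again called TradeFeedService here, as a placeholder) takes a TradeReader in its constructor:

@Test
public void translatesEachInputLineIntoARecord() throws Exception {
    // with StubTradeReader the "tradeFile" argument is simply the raw input text
    String input = "# a comment line\ntrade line 1\ntrade line 2";

    TradeFeedService service = new TradeFeedService(new StubTradeReader());

    Map<Header, Record> records = service.createTradeFeedRecords(input, "someConfig");

    // assuming each trade line yields a record with a distinct header
    assertEquals(2, records.size());
}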
What I want to do is to mock a newly created instance of BufferedReader. Here's the code that should be tested:
A.java
...
@Override
public String read(String fileName) throws IOException {
...
try {
fileReader = new FileReader(fileName);
bufferedReader = new BufferedReader(fileReader);
String tmp;
StringBuilder builder = new StringBuilder();
while ((tmp = bufferedReader.readLine()) != null) {
builder.append(tmp);
}
return builder.toString();
} catch (IOException e) {
...
} finally {
...
}
}
...
What I have to do is mock both the FileReader creation and the BufferedReader creation with PowerMock.
ATest.java
@RunWith(PowerMockRunner.class)
@PrepareForTest(A.class)
public class ATest {
    @Mock
    private FileReader fileReader;
    @Mock
    private BufferedReader bufferedReader;
    ...
    @Test
    public void test() throws Exception {
        PowerMockito.whenNew(FileReader.class).withArguments(FILE_NAME).thenReturn(fileReader);
        PowerMockito.whenNew(BufferedReader.class).withAnyArguments().thenReturn(bufferedReader);
        PowerMockito.doAnswer(new Answer() {
            public Object answer(InvocationOnMock invocation) throws Throwable {
                return "test";
            }
        }).when(bufferedReader).readLine();
        assertEquals("test", reader.read(FILE_NAME));
    }
}
But then the test never terminates. I can't even debug it.
As soon as PowerMockito.doAnswer() is removed the code is executed (and is available for debugging). I've also tried to use Mockito.mock() instead of PowerMockito.doAnswer(), it doesn't help.
What may cause the test to never terminate?
Just a different perspective: one could say that the real problem in your code is those two calls to new() for FileReader / BufferedReader.
What if you passed a Reader to this method; instead of a String denoting a file name?
What if you passed a "ReaderFactory" to the underlying class that contains this method read(String)? (where you would be using dependency injection to get that factory into your class)
Then: you would be looking at an improved design - and you would not need to use PowerMock. You could step back and go with Mockito or EasyMock; as there would be no more need to mock calls to new.
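For example, here is a minimal sketch of that factory idea. The ReaderFactory name and the reworked class A below are only illustrations of the design, not your actual code:

import java.io.*;

// a seam between "where the characters come from" and "what we do with them"
interface ReaderFactory {
    Reader create(String fileName) throws IOException;
}

class FileReaderFactory implements ReaderFactory {
    @Override
    public Reader create(String fileName) throws IOException {
        return new FileReader(fileName);
    }
}

class A {
    private final ReaderFactory readerFactory;

    A(ReaderFactory readerFactory) {
        this.readerFactory = readerFactory; // injected, so tests can pass a stub
    }

    public String read(String fileName) throws IOException {
        StringBuilder builder = new StringBuilder();
        try (BufferedReader bufferedReader = new BufferedReader(readerFactory.create(fileName))) {
            String tmp;
            while ((tmp = bufferedReader.readLine()) != null) {
                builder.append(tmp);
            }
        }
        return builder.toString();
    }
}

In a test you could then simply write new A(name -> new StringReader("test line")) and assert on the result - no file system, no PowerMock.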
So, my answer is: you created hard-to-test code. Now you try to fix a design problem using the big (ugly) PowerMock hammer. Yes, that will work. But it is just the second best alternative.
The more reasonable option is to learn how to write testable code (start here for example); and well, write testable code. And stop using PowerMock ( I have done that many months back; after a lot of PowerMock-induced pain; and I have never ever regretted this decision ).
The problem was that I also had to mock the value returned after the first bufferedReader.readLine() call, because otherwise the mock would always return the same value and the loop would never terminate.
Mockito.when(bufferedReader.readLine()).thenReturn("first line").thenReturn(null);
NOTE
Though this is the actual answer to the question, you should strongly consider choosing the design GhostCat has suggested in another answer (which I eventually did).
It's been a while since I have been here, and I am just trying to re-familiarize myself with the test automation framework I have been working on. Maybe a stupid question, but I am going to throw it out there anyway as I think aloud.
Because I have introduced a config file which contains the path to an Excel file (which contains test data) and implemented a basic Excel reader to extract this data for testing, I am finding that a great deal of my initial test is taken up by all this setup.
For instance:
create an instance of a ReadPropertyFile class
create an object of an ExcellDataConfig class and pass it the location of the Excel file from the config file
set the test case id for this test so it can scan the Excel file for where to start reading the data from the sheet - the Excel sheet contains markers
get the row/column locations from the sheet of all the interesting stuff I need for my test, e.g. username/password or some other data
open the browser
in the case of running a test for multiple users, set up a for loop that iterates through the Excel sheet, logs in, and then does the actual test.
That is a lot of configuration, but is there a simpler way?
I have a separate TestBase class which contains the login class, and I thought to somehow move this user login info there, but I am not sure if that is such a good idea.
I just don't want to get bogged down duplicating work. Does anyone have any high-level suggestions?
Here is a compilable (but not fully coded) quick-and-dirty example of how you could design a base class for Selenium test classes. It follows the DRY principle (Don't Repeat Yourself).
The base class defines a login/logout method which would be called prior/after test execution of derived test classes.
Data is read from a Json file (based on javax.json) and used for locating elements (using the keys) and entering data (using the values). You can easily expand the code to support handling of other elements or location strategies (css, xpath).
Note that this example is not performance-optimised. But it is quick enough for a start and you could adapt it to your needs (e.g. eager data loading in a static context).
package myproject;
import java.io.*;
import java.util.*;
import javax.json.Json;
import javax.json.stream.JsonParser;
import javax.json.stream.JsonParser.Event;
import org.junit.*;
import org.openqa.selenium.*;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.*;
public class MyProjectBaseTest {
protected static WebDriver driver;
@Before
public void before() {
driver = new FirefoxDriver(); // assign the shared field; declaring a new local variable here would leave the field null
driver.get("http://myapp");
login();
}
@After
public void after() {
logout();
}
private void login() {
Map<String, String> data = readData("/path/to/testdata/login.json");
Set<String> keys = data.keySet();
for (String key : keys) {
WebDriverWait wait = new WebDriverWait(driver, 20L);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id(key)));
final WebElement we = driver.findElement(By.id(key));
if ("input".equals(we.getTagName())) {
we.clear();
we.sendKeys(data.get(key));
}
//else if "button".equals(we.getTagName())
}
}
private void logout() {
//logout code ...
}
private Map<String, String> readData(String filename) {
Map<String, String> data = new HashMap<String, String>();
InputStream is = null;
String key = null;
try {
is = new FileInputStream(filename);
JsonParser parser = Json.createParser(is);
while (parser.hasNext()) {
Event e = parser.next();
if (e == Event.KEY_NAME) {
key = parser.getString();
}
if (e == Event.VALUE_STRING) {
data.put(key, parser.getString());
}
}
parser.close();
}
catch (IOException e) {
//error handling
}
finally {
//close is
}
return data;
}
}
All this "setup work" you've described is actually pretty common stuff, and it is how the AAA (Arrange-Act-Assert) pattern really works:
a pattern for arranging and formatting code in UnitTest methods
For advanced fixture usage you could utilize whichever xUnit setup pattern is most suitable for your case.
I totally agree with @Würgspaß's comment. What he is describing is called an Object Map, and I've used it heavily in the past 3 years with great success across multiple automation projects.
I don't see any usage of a specific framework in your scenario, so I would suggest that you pick a mature one, like TestNG in combination with Cucumber JVM. The latter provides context injection, so you always get clean step definition objects that can share context/state during the scenario run. You will also be able to do all the heavy setup just once and share it between all the tests. I/O operations are expensive and may cause issues in more complex cases, e.g. parallel execution of your tests.
As for the design of your code, you can find some of the Selenium's Test design considerations very useful, like the CallWrappers.
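To make the Object Map idea a bit more concrete, here is a rough sketch; the file format, key names and locator prefixes are only assumptions. Logical element names live in a properties file, e.g. login.username=id:username, and are resolved to Selenium By locators at runtime:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import org.openqa.selenium.By;

public class ObjectMap {

    private final Properties locators = new Properties();

    public ObjectMap(String propertiesFile) throws IOException {
        try (FileInputStream in = new FileInputStream(propertiesFile)) {
            locators.load(in);
        }
    }

    public By get(String logicalName) {
        String value = locators.getProperty(logicalName); // e.g. "id:username"
        if (value == null) {
            throw new IllegalArgumentException("No locator defined for " + logicalName);
        }
        String[] parts = value.split(":", 2);
        switch (parts[0]) {
            case "id":    return By.id(parts[1]);
            case "css":   return By.cssSelector(parts[1]);
            case "xpath": return By.xpath(parts[1]);
            default:      throw new IllegalArgumentException("Unknown locator type: " + parts[0]);
        }
    }
}

Tests then refer to driver.findElement(map.get("login.username")), and locator changes stay in one file instead of being scattered across test classes.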
I have a text file and that file lists all the operations that can be performed on a Pump Class.
example of the content of text file
Start PayCredit Reject Start PayCredit Reject TurnOff
....
.... so on.
These are the methods of the Pump class (Start(), Reject(), etc.).
I need to write code that reads these methods from the file one by one and executes them.
public static void main(String[] args) throws IOException
{
    Pump gp = new Pump();
    File file = new File("C:\\Users\\Desktop\\checker\\check.txt");
    BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(file)));
    String line = null;
    while ((line = br.readLine()) != null)
    {
        String words[] = line.split(" ");
        for (int i = 0; i < words.length; i++)
        {
            String temp = words[i] + "()";
            gp.temp; // compilation error
        }
    }
}
Could you tell me how I can achieve this functionality?
If you're not so familiar with reflection, maybe try using org.springframework.util.ReflectionUtils from the Spring Framework project?
The code would go something like this:
Pump gp = new Pump();
....
String temp = // from text file
....
Method m = ReflectionUtils.findMethod(Pump.class, temp);
Object result = ReflectionUtils.invokeMethod(m, gp);
You would need to use reflection to invoke the methods at runtime. Here is a simple example that assumes that all methods do not take any parameters.
Class<? extends Pump> pumpClass = gp.getClass();
String methodName = words[i];
// getMethod throws NoSuchMethodException if the method does not exist
// (it never returns null), so the result can be invoked directly
Method toInvoke = pumpClass.getMethod(methodName);
toInvoke.invoke(gp);
First of all, be aware that Java source is not interpreted at runtime, so you can't do it this way.
If you already have the methods such as Start PayCredit Reject TurnOff and so on you can do it in the following way:
for (int i = 0; i < words.length; i++)
{
    String temp = words[i];
    if (temp.equals("Start")) gp.Start();
    else if (temp.equals("PayCredit")) gp.PayCredit();
    ...
}
Use a switch case:
for (int i = 0; i < words.length; i++) {
    String temp = words[i];
    switch (temp) {
        case "Start":
            gp.Start();
            break;
        case "PayCredit":
            gp.PayCredit();
            break;
    }
}
You can use reflection to do this, e.g.
String line = null;
Method method = null;
while ((line = br.readLine()) != null)
{
    String words[] = line.split(" ");
    for (int i = 0; i < words.length; i++)
    {
        String temp = words[i];
        method = getClass().getMethod(temp);
        method.invoke(this);
    }
}
That's assuming you want to call the method on this, of course, and that it's an instance method. Look at Class.getMethod and related methods, along with Method itself, for more details. You may want getDeclaredMethod instead, and you may need to make it accessible.
I would see if you can think of a way of avoiding this if possible though - reflection tends to get messy quickly. It's worth taking a step back and considering if this is the best design. If you give us more details of the bigger picture, we may be able to suggest alternatives.
I want to make a simple interactive shell based on the console where I can write commands like login, help, et cetera.
I first thought of using Enums, but then I didn't know how to implement them neatly without a load of if-else statements, so I decided to go with an array-approach and came up with this:
public class Parser {
private static String[] opts = new String[] {"opt0", "opt1", "opt2", "opt3" ... }
public void parse(String text) {
for(int i = 0; i < opts.length; i++) {
if(text.matches(opts[i])) {
switch(i) {
case 0:
// Do something
case 1:
// Do something-something
case 2:
// Do something else
}
return;
}
}
}
}
But I ended up seeing that this is probably the most rudimentary way of doing something like this, and that there would be problems if I wanted to change the order of the options. How could I make a simpler parser? This way works, but it has the problems I mentioned. The program is purely educational, not intended for anything serious.
A simple approach is to have a HashMap with the key equal to the command text and the value an instance of a class that handles this command. Assuming that the command handler does not take arguments (but you can easily extend this), you can just use a Runnable instance.
Example code:
Runnable helpHandler = new Runnable() {
    public void run() {
        // handle the command
    }
};
// Define all your command handlers
HashMap<String, Runnable> commandsMap = new HashMap<>(); // Java 7 syntax
commandsMap.put("help",helpHandler);
// Add all your command handlers instances
String cmd; // read the user input
Runnable handler;
if((handler = commandsMap.get(cmd)) != null) {
handler.run();
}
You can easily extend this approach to accept arguments by implementing your own interface and subclassing it. It is good to use variable arguments if you know the data type, e.g. void execute(String... args), as sketched below.
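Here is a rough sketch of that extension, assuming Java 8; the Command interface, the Shell class and the example handlers are made up for illustration:

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

interface Command {
    void execute(String... args);
}

public class Shell {

    private final Map<String, Command> commands = new HashMap<>();

    public Shell() {
        // each handler receives whatever was typed after the command word
        commands.put("login", args -> System.out.println("logging in as " + String.join(" ", args)));
        commands.put("help", args -> System.out.println("available commands: " + commands.keySet()));
    }

    public void dispatch(String inputLine) {
        String[] tokens = inputLine.trim().split("\\s+");
        Command handler = commands.get(tokens[0]);
        if (handler == null) {
            System.out.println("unknown command: " + tokens[0]);
            return;
        }
        handler.execute(Arrays.copyOfRange(tokens, 1, tokens.length));
    }
}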
One solution that comes to mind is actually using design patterns. You could use the input from the user as the discriminator for a Factory class.
This factory class will generate an object, with an "execute" method, based on the input. This is called a Command object.
Then you can simply call the method of the object returned from the factory.
No need for a switch statement. If the object is null, then you know the user entered an invalid option, and it abstracts the decision logic away from your input parser.
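For instance, a small sketch of how that could look; all class names here are made up for illustration:

interface Command {
    void execute();
}

class LoginCommand implements Command {
    @Override
    public void execute() { /* perform the login */ }
}

class HelpCommand implements Command {
    @Override
    public void execute() { /* print the help text */ }
}

class CommandFactory {
    // returns null for input that does not match any known command
    static Command fromInput(String input) {
        switch (input) {
            case "login": return new LoginCommand();
            case "help":  return new HelpCommand();
            default:      return null;
        }
    }
}

The parser then reduces to Command cmd = CommandFactory.fromInput(text); if (cmd != null) cmd.execute(); with no switch over indices anywhere.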
Hopefully this will help :)
I have an interesting problem and would appreciate your thoughts for the best solution.
I need to parse a set of logs. The logs are produced by a multi-threaded program and a single process cycle produces several lines of logs.
When parsing these logs I need to pull out specific pieces of information from each process - naturally this information is spread across multiple lines (I want to compress these pieces of data into a single line). Because the application is multi-threaded, the block of lines belonging to a process can be fragmented, as other processes write to the same log file at the same time.
Fortunately, each line gives a process ID so I'm able to distinguish what logs belong to what process.
Now, there are already several parsers which all extend the same class but were designed to read logs from a single-threaded application (no fragmentation, as in the original system) and use a readLine() method in the superclass. These parsers will keep reading lines until all regular expressions have been matched for a block of lines (i.e. lines written in a single process cycle).
So, what can I do with the super class so that it can manage the fragmented logs, and ensure change to the existing implemented parsers is minimal?
It sounds like there are some existing parser classes already in use that you wish to leverage. In this scenario, I would write a decorator for the parser which strips out lines not associated with the process you are monitoring.
It sounds like your classes might look like this:
abstract class Parser {
public abstract void parse( ... );
protected String readLine() { ... }
}
class SpecialPurposeParser extends Parser {
public void parse( ... ) {
// ... special stuff
readLine();
// ... more stuff
}
}
And I would write something like:
class SingleProcessReadingDecorator extends Parser {
private Parser parser;
private String processId;
public SingleProcessReadingDecorator( Parser parser, String processId ) {
this.parser = parser;
this.processId = processId;
}
public void parse( ... ) { parser.parse( ... ); }
public String readLine() {
String text = super.readLine();
if( /*text is for processId */ ) {
return text;
}
else {
//keep readLine'ing until you find the next line and then return it
return this.readLine();
}
}
}
Then any occurrence you want to modify would be used like this:
//old way
Parser parser = new SpecialPurposeParser();
//changes to
Parser parser = new SingleProcessReadingDecorator( new SpecialPurposeParser(), "process1234" );
This code snippet is simple and incomplete, but gives you the idea of how the decorator pattern could work here.
I would write a simple distributor that reads the log file line by line and stores them in different VirtualLog objects in memory -- a VirtualLog being a kind of virtual file, actually just a String or something that the existing parsers can be applied to. The VirtualLogs are stored in a Map with the process ID (PID) as the key. When you read a line from the log, check if the PID is already there. If so, add the line to the PID's respective VirtualLog. If not, create a new VirtualLog object and add it to the Map. Parsers run as separate Threads, one on every VirtualLog. Every VirtualLog object is destroyed as soon as it has been completely parsed.
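Here is a rough sketch of that distributor, leaving out the threading and the hand-off to the existing parsers; the VirtualLog shape and the way the PID is extracted are only assumptions:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class LogDistributor {

    // accumulates the lines belonging to one process id
    static class VirtualLog {
        private final StringBuilder lines = new StringBuilder();

        void add(String line) {
            lines.append(line).append('\n');
        }

        String contents() {
            return lines.toString();
        }
    }

    public Map<String, VirtualLog> distribute(String logFile) throws IOException {
        Map<String, VirtualLog> logsByPid = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(logFile))) {
            for (String line; (line = reader.readLine()) != null; ) {
                String pid = extractPid(line);
                // create the VirtualLog the first time a pid shows up, then append
                logsByPid.computeIfAbsent(pid, key -> new VirtualLog()).add(line);
            }
        }
        // each VirtualLog can now be wrapped in a StringReader, fed to an
        // existing parser (possibly on its own thread) and discarded afterwards
        return logsByPid;
    }

    private String extractPid(String line) {
        // assumption: the pid is the first whitespace-delimited token of the line
        return line.split("\\s+", 2)[0];
    }
}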
You need to store lines temporarily in a queue where a single thread consumes them and passes them on once each set has been completed. If you have no way of knowing if a set is complete or not, by either the number of lines or the content of the lines, you could consider using a sliding-window technique where you don't collect the individual sets until after a certain time has passed.
One simple solution could be to read the file line by line and write several files, one for each process id. The list of process id's can be kept in a hash-map in memory to determine if a new file is needed or in which already created file the lines for a certain process id will go. Once all the (temporary) files are written, the existing parsers can do the job on each one.
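As a sketch of that approach, assuming again that the process id is the first token on every line (both the class and that convention are illustrative only):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class LogSplitter {

    // splits one interleaved log into one temporary file per process id
    public Map<String, Path> split(String logFile) throws IOException {
        Map<String, PrintWriter> writers = new HashMap<>();
        Map<String, Path> filesByPid = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(logFile))) {
            for (String line; (line = reader.readLine()) != null; ) {
                String pid = line.split("\\s+", 2)[0];
                PrintWriter writer = writers.get(pid);
                if (writer == null) {
                    // first line for this pid: create its temporary file
                    Path tempFile = Files.createTempFile("log-" + pid + "-", ".log");
                    filesByPid.put(pid, tempFile);
                    writer = new PrintWriter(Files.newBufferedWriter(tempFile, StandardCharsets.UTF_8));
                    writers.put(pid, writer);
                }
                writer.println(line);
            }
        } finally {
            writers.values().forEach(PrintWriter::close);
        }
        // hand each per-process file to the existing single-threaded parsers
        return filesByPid;
    }
}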
Would something like this do it? It runs a new Thread for each Process ID in the log file.
class Parser {
String currentLine;
Parser() {
//Construct parser
}
synchronized String readLine(String processId) {
    if (currentLine == null)
        currentLine = readLinefromLog();
    // wait until the buffered line belongs to this parser's process id
    while (currentLine != null && !getProcessIdFromLine(currentLine).equals(processId)) {
        try {
            wait();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
    String line = currentLine;
    currentLine = readLinefromLog();
    // wake all waiting parsers so the one matching the new line can proceed
    notifyAll();
    return line;
}
}
class ProcessParser extends Parser implements Runnable{
String processId;
ProcessParser(String processId) {
super();
this.processId = processId;
}
void startParser() {
new Thread(this).start();
}
public void run() {
String line = null;
while ((line = readLine()) != null) {
// process log line here
}
}
String readLine() {
String line = super.readLine(processId);
return line;
}