PowerMockito for testing MapReduce Code - java

I have the following reducer code, and I am trying to use PowerMock to test it.
package com.cerner.cdh.examples.reducer;
public class LinkReversalReducer extends TableReducer<Text, Text, ImmutableBytesWritable> {
@Override
protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
StringBuilder inlinks = new StringBuilder();
for (Text value : values) {
inlinks.append(value.toString());
inlinks.append(" ");
}
byte[] docIdBytes = Bytes.toBytes(key.toString());
Put put = new Put(docIdBytes);
put.add(WikiConstants.COLUMN_FAMILY_BYTES, WikiConstants.INLINKS_COLUMN_QUALIFIER_BYTES,
Bytes.toBytes(inlinks.toString().trim()));
context.write(new ImmutableBytesWritable(docIdBytes), put);
}
}
Below is the test I have written for the above:
@Test
public void testLinkReversalReducer() throws IOException, InterruptedException {
Text key = new Text("key");
@SuppressWarnings("rawtypes")
Context context = PowerMockito.mock(Context.class);
Iterable<Text> values = generateText();
StringBuilder inlinks = new StringBuilder();
for (Text value : values) {
inlinks.append(value);
inlinks.append(" ");
}
LinkReversalReducer reducer = new LinkReversalReducer();
byte[] docIdBytes = Bytes.toBytes(key.toString());
byte[] argument1 = WikiConstants.COLUMN_FAMILY_BYTES;
byte[] argument2 = WikiConstants.INLINKS_COLUMN_QUALIFIER_BYTES;
byte[] argument3 = Bytes.toBytes(inlinks.toString().trim());
Put put = new Put(docIdBytes);
put.add(argument1, argument2, argument3);
reducer.reduce(key, values, context);
Mockito.verify(context).write(new ImmutableBytesWritable(docIdBytes), put);
}
private List<Text> generateText() {
Text value = new Text("AB");
List<Text> texts = new ArrayList<Text>();
texts.add(value);
return texts;
}
}
So the thing is that my Mockito.verify(context).write(new ImmutableBytesWritable(docIdBytes), put); seems to get called with the right values in place, and my JUnit result shows that the invoked and the actual calls give the same response. But the test still fails. Does anyone have a clue? Any help would be appreciated :)

The problem here is that the Put class does not define an equals method. Therefore the verify method thinks that the actual Put passed to context.write inside your LinkReversalReducer.reduce method is different from the expected Put assembled in your testLinkReversalReducer method.
To work around this problem you could do the following:
Mockito.verify(context).write(Mockito.eq(new ImmutableBytesWritable(docIdBytes)), MockitoHelper.eq(put));
...
class MockitoHelper {
public static Put eq(final Put expectedPut) {
return Mockito.argThat(new CustomTypeSafeMatcher<Put>(expectedPut.toString()) {
@Override
protected boolean matchesSafely(Put actualPut) {
return Bytes.equals(toBytes(expectedPut), toBytes(actualPut));
}
});
}
private static byte[] toBytes(Put put) {
// ByteArrayDataOutput is a Guava interface, so obtain an instance via ByteStreams
ByteArrayDataOutput out = ByteStreams.newDataOutput();
try {
put.write(out);
return out.toByteArray();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
}
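An alternative sketch, not from the original answer: capture the Put that the reducer actually passes to context.write with Mockito's ArgumentCaptor (org.mockito.ArgumentCaptor) and assert on the parts you care about. This assumes it sits in the test right after reducer.reduce(key, values, context), with JUnit's Assert imported:
ArgumentCaptor<Put> putCaptor = ArgumentCaptor.forClass(Put.class);
Mockito.verify(context).write(Mockito.eq(new ImmutableBytesWritable(docIdBytes)), putCaptor.capture());
Put actualPut = putCaptor.getValue();
// ImmutableBytesWritable defines equals, so only the Put needs manual checking;
// compare the row key here, and individual cells the same way if needed
Assert.assertArrayEquals(docIdBytes, actualPut.getRow());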


Java Hadoop weird join behaviour

Aim
I have two CSV files and I am trying to make a join between them. One contains movieId, title and the other contains userId, movieId, comment-tag. I want to find out how many comment-tags each movie has, by printing title, comment_count. So my code:
Driver
public class Driver
{
public Driver(String[] args)
{
if (args.length < 3) {
System.err.println("input path ");
}
try {
Job job = Job.getInstance();
job.setJobName("movie tag count");
// set file input/output path
MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, TagMapper.class);
MultipleInputs.addInputPath(job, new Path(args[2]), TextInputFormat.class, MovieMapper.class);
FileOutputFormat.setOutputPath(job, new Path(args[3]));
// set jar class name
job.setJarByClass(Driver.class);
// set mapper and reducer to job
job.setReducerClass(Reducer.class);
// set output key class
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
int returnValue = job.waitForCompletion(true) ? 0 : 1;
System.out.println(job.isSuccessful());
System.exit(returnValue);
} catch (IOException | ClassNotFoundException | InterruptedException e) {
e.printStackTrace();
}
}
}
MovieMapper
public class MovieMapper extends org.apache.hadoop.mapreduce.Mapper<Object, Text, Text, Text>
{
@Override
protected void map(Object key, Text value, Context context) throws IOException, InterruptedException
{
String line = value.toString();
String[] items = line.split("(?!\\B\"[^\"]*),(?![^\"]*\"\\B)"); //comma not in quotes
String movieId = items[0].trim();
if(tryParseInt(movieId))
{
context.write(new Text(movieId), new Text(items[1].trim()));
}
}
private boolean tryParseInt(String s)
{
try {
Integer.parseInt(s);
return true;
} catch (NumberFormatException e) {
return false;
}
}
}
TagMapper
public class TagMapper extends org.apache.hadoop.mapreduce.Mapper<Object, Text, Text, Text>
{
@Override
protected void map(Object key, Text value, Context context) throws IOException, InterruptedException
{
String line = value.toString();
String[] items = line.split("(?!\\B\"[^\"]*),(?![^\"]*\"\\B)");
String movieId = items[1].trim();
if(tryParseInt(movieId))
{
context.write(new Text(movieId), new Text("_"));
}
}
private boolean tryParseInt(String s)
{
try {
Integer.parseInt(s);
return true;
} catch (NumberFormatException e) {
return false;
}
}
}
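For reference (a small illustration, not part of the original post), the comma-splitting regex used in both mappers keeps commas inside quoted fields intact:
public class SplitDemo {
    public static void main(String[] args) {
        String line = "11,\"American President, The (1995)\"";
        String[] items = line.split("(?!\\B\"[^\"]*),(?![^\"]*\"\\B)");
        for (String item : items) {
            // prints: 11  and then  "American President, The (1995)"
            System.out.println(item);
        }
    }
}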
Reducer
public class Reducer extends org.apache.hadoop.mapreduce.Reducer<Text, Text, Text, IntWritable>
{
@Override
protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException
{
int noOfFrequency = 0;
Text movieTitle = new Text();
for (Text o : values)
{
if(o.toString().trim().equals("_"))
{
noOfFrequency++;
}
else
{
System.out.println(o.toString());
movieTitle = o;
}
}
context.write(movieTitle, new IntWritable(noOfFrequency));
}
}
The problem
The result I get is something like this:
title, count
_, count
title, count
title, count
_, count
title, count
_, count
How does this _ get to be the key? I can't understand it. There is an if statement checking for the _: count it and don't use it as the title. Is there something wrong with the toString() method, so the equals comparison fails? Any ideas?
It is not weird: you iterate through values, and o is a reference to the elements of values, which here are Text objects. At some point you make movieTitle point to the same object as o (movieTitle = o). In later iterations o points to "_", so movieTitle also ends up pointing to "_".
If you change your code like this, everything works fine:
int noOfFrequency = 0;
Text movieTitle = null;
for (Text o : values)
{
if(o.toString().trim().equals("_"))
{
noOfFrequency++;
}
else
{
movieTitle = new Text(o.toString());
}
}
context.write(movieTitle, new IntWritable(noOfFrequency));
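To see why the copy matters, here is a minimal plain-Java sketch (not Hadoop itself) of the aliasing problem: the framework reuses a single Text instance for every value, so keeping the reference keeps only whatever was written into it last.
import org.apache.hadoop.io.Text;

public class TextReuseDemo {
    public static void main(String[] args) {
        Text reused = new Text();      // stand-in for the instance Hadoop reuses
        Text kept = null;
        for (String s : new String[] { "Title", "_" }) {
            reused.set(s);             // contents are overwritten each iteration
            kept = reused;             // keeps the reference, not a copy
        }
        System.out.println(kept);      // prints "_", not "Title"
        // new Text(reused) or new Text(reused.toString()) would keep a real copy
    }
}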

PIG Custom loader's getNext() is being called again and again

I have started working with Apache Pig for one of our projects. I have to create a custom input format to load our data files. For this, I followed this example Hadoop:Custom Input format. I also created my custom RecordReader implementation to read the data (we get our data in binary format from some other application) and parse that to proper JSON format.
The problem occurs when I use my custom loader in a Pig script. As soon as my loader's getNext() method is invoked, it calls my custom RecordReader's nextKeyValue() method, which works fine. It reads the data properly and passes it back to my loader, which parses the data and returns a Tuple. So far so good.
The problem arises when my loader's getNext() method is called again and again. It gets called, works fine, and returns the proper output (I debugged it up to the return statement). But then, instead of letting execution go further, my loader gets called again. I tried to see how many times my loader is called, and I could see the number go up to 20K!
Can somebody please help me understand the problem in my code?
Loader
public class SimpleTextLoaderCustomFormat extends LoadFunc {
protected RecordReader in = null;
private byte fieldDel = '\t';
private ArrayList<Object> mProtoTuple = null;
private TupleFactory mTupleFactory = TupleFactory.getInstance();
@Override
public Tuple getNext() throws IOException {
Tuple t = null;
try {
boolean notDone = in.nextKeyValue();
if (!notDone) {
return null;
}
String value = (String) in.getCurrentValue();
byte[] buf = value.getBytes();
int len = value.length();
int start = 0;
for (int i = 0; i < len; i++) {
if (buf[i] == fieldDel) {
readField(buf, start, i);
start = i + 1;
}
}
// pick up the last field
readField(buf, start, len);
t = mTupleFactory.newTupleNoCopy(mProtoTuple);
mProtoTuple = null;
} catch (InterruptedException e) {
int errCode = 6018;
String errMsg = "Error while reading input";
e.printStackTrace();
throw new ExecException(errMsg, errCode,
PigException.REMOTE_ENVIRONMENT, e);
}
return t;
}
private void readField(byte[] buf, int start, int end) {
if (mProtoTuple == null) {
mProtoTuple = new ArrayList<Object>();
}
if (start == end) {
// NULL value
mProtoTuple.add(null);
} else {
mProtoTuple.add(new DataByteArray(buf, start, end));
}
}
@Override
public InputFormat getInputFormat() throws IOException {
//return new TextInputFormat();
return new CustomStringInputFormat();
}
@Override
public void setLocation(String location, Job job) throws IOException {
FileInputFormat.setInputPaths(job, location);
}
@Override
public void prepareToRead(RecordReader reader, PigSplit split)
throws IOException {
in = reader;
}
}
Custom InputFormat
public class CustomStringInputFormat extends FileInputFormat<String, String> {
@Override
public RecordReader<String, String> createRecordReader(InputSplit arg0,
TaskAttemptContext arg1) throws IOException, InterruptedException {
return new CustomStringInputRecordReader();
}
}
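A side note, not from the original question or answer: when a RecordReader reads the whole file as a single record, the input format usually also disables splitting, otherwise every split of a large file produces its own reader over the same file. A minimal sketch, assuming the class above (the subclass name is illustrative):
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;

public class WholeFileStringInputFormat extends CustomStringInputFormat {
    // one split per file -> one record reader -> one whole-file record
    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        return false;
    }
}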
Custom RecordReader
public class CustomStringInputRecordReader extends RecordReader<String, String> {
private String fileName = null;
private String data = null;
private Path file = null;
private Configuration jc = null;
private static int count = 0;
@Override
public void close() throws IOException {
// jc = null;
// file = null;
}
@Override
public String getCurrentKey() throws IOException, InterruptedException {
return fileName;
}
@Override
public String getCurrentValue() throws IOException, InterruptedException {
return data;
}
@Override
public float getProgress() throws IOException, InterruptedException {
return 0;
}
@Override
public void initialize(InputSplit genericSplit, TaskAttemptContext context)
throws IOException, InterruptedException {
FileSplit split = (FileSplit) genericSplit;
file = split.getPath();
jc = context.getConfiguration();
}
@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
InputStream is = FileSystem.get(jc).open(file);
StringWriter writer = new StringWriter();
IOUtils.copy(is, writer, "UTF-8");
data = writer.toString();
fileName = file.getName();
writer.close();
is.close();
System.out.println("Count : " + ++count);
return true;
}
}
Try this in Loader
//....
boolean notDone = ((CustomStringInputRecordReader) in).nextKeyValue();
//...
Text value = new Text(((CustomStringInputRecordReader) in).getCurrentValue().toString());
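A further observation, not part of the original answer: the posted nextKeyValue() always returns true, so the framework (and therefore the loader's getNext()) keeps asking for more records. A whole-file reader usually remembers whether it has already emitted its single record; a sketch of that method inside CustomStringInputRecordReader:
private boolean processed = false;

@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
    if (processed) {
        return false;                  // nothing left in this split
    }
    InputStream is = FileSystem.get(jc).open(file);
    StringWriter writer = new StringWriter();
    IOUtils.copy(is, writer, "UTF-8");
    data = writer.toString();
    fileName = file.getName();
    writer.close();
    is.close();
    processed = true;                  // emit the file exactly once
    return true;
}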

Hadoop mapper is never called, custom input format might be the issue

So I am doing a little test program just to get the hang of Hadoop's InputFormat classes. I had a word search already built which took in lines as values and searched for the word line by line. I wanted to see if I could get Hadoop to take in values word by word, but Hadoop doesn't seem to like that and keeps giving me results using the default mapper. My mapper's initialize function is never even called.
I do know my record reader is called and that it is doing more or less what it is supposed to, and I'm pretty sure the output of the record reader is what my mapper is searching for, so why does Hadoop decide not to call it?
Here is the relevant code
Input Format Class
public class WordReader extends FileInputFormat<Text, Text> {
@Override
public RecordReader<Text, Text> createRecordReader(InputSplit split,
TaskAttemptContext context) {
return new MyWholeFileReader();
}
}
Record Reader
public class MyWholeFileReader extends RecordReader<Text, Text> {
private long start;
private LineReader in;
private Text key = null;
private Text value = null;
private ArrayList<String> outputvalues;
public void initialize(InputSplit genericSplit,
TaskAttemptContext context) throws IOException {
outputvalues = new ArrayList<String>();
FileSplit split = (FileSplit) genericSplit;
Configuration job = context.getConfiguration();
start = split.getStart();
final Path file = split.getPath();
// open the file and seek to the start of the split
FileSystem fs = file.getFileSystem(job);
FSDataInputStream fileIn = fs.open(split.getPath());
in = new LineReader(fileIn, job);
if (key == null) {
key = new Text();
}
key.set(split.getPath().getName());
if (value == null) {
value = new Text();
}
}
public boolean nextKeyValue() throws IOException {
if (outputvalues.size() == 0) {
Text buffer = new Text();
int i = in.readLine(buffer);
String str = buffer.toString();
for (String vals : str.split(" ")) {
outputvalues.add(vals);
}
if (i == 0 || outputvalues.size() == 0) {
key = null;
value = null;
return false;
}
}
value.set(outputvalues.remove(0));
System.out.println(value.toString());
return true;
}
@Override
public Text getCurrentKey() {
return key;
}
@Override
public Text getCurrentValue() {
return value;
}
/**
*
* Get the progress within the split
*/
public float getProgress() {
return 0.0f;
}
public synchronized void close() throws IOException {
if (in != null) {
in.close();
}
}
}
Mapper
public class WordSearchMapper extends Mapper<Text, Text, OutputCollector<Text,IntWritable>, Reporter> {
static String keyword;
BloomFilter<String> b;
public void configure(JobContext jobConf) {
keyword = jobConf.getConfiguration().get("keyword");
System.out.println("keyword>> " + keyword);
b = new BloomFilter<String>(.01,10000);
b.add(keyword);
System.out.println(b.getExpectedBitsPerElement());
}
public void map(Text key, Text value, OutputCollector<Text,IntWritable> output,
Reporter reporter) throws IOException {
int wordPos;
System.out.println("value.toString()>> " + value.toString());
System.out.println(((FileSplit) reporter.getInputSplit()).getPath()
.getName());
String[] tokens = value.toString().split("[\\p{P} \\t\\n\\r]");
for (String st :tokens) {
if (b.contains(st)) {
if (value.toString().contains(keyword)) {
System.out.println("Found one");
wordPos = ((Text) value).find(keyword);
output.collect(value, new IntWritable(wordPos));
}
}
}
}
}
Driver:
public class WordSearch {
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = new Job(conf,"WordSearch");
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setMapperClass(WordSearchMapper.class);
job.setInputFormatClass( WordReader.class);
job.setOutputFormatClass(TextOutputFormat.class);
conf.set("keyword", "the");
FileInputFormat.setInputPaths(job, new Path("search.txt"));
FileOutputFormat.setOutputPath(job, new Path("outputs"+System.currentTimeMillis()));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
And I figured it out... this is why hadoop needs to stop supporting multiple versions of itself or why I should stop jamming multiple tutorials together. Turns out my mapper needs to be set up like this for the way my mapper and record reader are set up to interact.
public class WordSearchMapper extends Mapper { static String keyword;
I only realized this after looking at my imports and seeing that Reporter was from the package org.apache.hadoop.mapred as opposed to org.apache.hadoop.mapreduce.
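For reference, a rough sketch of the mapper when everything stays on the org.apache.hadoop.mapreduce API (the body is illustrative, not the poster's final code): the new API uses setup(Context) instead of configure(), and context.write instead of OutputCollector/Reporter.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordSearchMapper extends Mapper<Text, Text, Text, IntWritable> {
    private String keyword;

    @Override
    protected void setup(Context context) {
        // configure() belongs to the old mapred API; the new API uses setup()
        keyword = context.getConfiguration().get("keyword");
    }

    @Override
    protected void map(Text key, Text value, Context context)
            throws IOException, InterruptedException {
        // each value is a single word produced by the record reader
        if (value.toString().equals(keyword)) {
            context.write(value, new IntWritable(value.find(keyword)));
        }
    }
}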

Dcm4Che - getting images from pacs

I've got the following problem. I have to write a small application that connects to a PACS and gets images. I decided to use the dcm4che toolkit. I've written the following code:
public class Dcm4 {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
// TODO code application logic here
DcmQR dcmqr = new MyDcmQR("server");
dcmqr.setCalledAET("server", true);
dcmqr.setRemoteHost("213.165.94.158");
dcmqr.setRemotePort(104);
dcmqr.getKeys();
dcmqr.setDateTimeMatching(true);
dcmqr.setCFind(true);
dcmqr.setCGet(true);
dcmqr.setQueryLevel(MyDcmQR.QueryRetrieveLevel.IMAGE);
dcmqr.addMatchingKey(Tag.toTagPath("PatientID"),"2011");
dcmqr.addMatchingKey(Tag.toTagPath("StudyInstanceUID"),"1.2.276.0.7230010.3.1.2.669896852.2528.1325171276.917");
dcmqr.addMatchingKey(Tag.toTagPath("SeriesInstanceUID"),"1.2.276.0.7230010.3.1.3.669896852.2528.1325171276.916");
dcmqr.configureTransferCapability(true);
List<DicomObject> result=null;
byte[] imgTab=null;
BufferedImage bImage=null;
try {
dcmqr.start();
System.out.println("started");
dcmqr.open();
System.out.println("opened");
result = dcmqr.query();
System.out.println("queried");
dcmqr.get(result);
System.out.println("List Size = " + result.size());
for(DicomObject dco:result){
System.out.println(dco);
dcmTools.toByteArray(dco);
System.out.println("end parsing");
}
} catch (Exception e) {
System.out.println("error "+e);
}
try{
dcmqr.stop();
dcmqr.close();
}catch (Exception e) {
}
System.out.println("done");
}
}
Everything seems to be fine until I call dcmTools.toByteArray(dco).
Output until calling toByteArray() looks like this:
List Size = 1
(0008,0052) CS #6 [IMAGE] Query/Retrieve Level
(0008,0054) AE #6 [server] Retrieve AE Title
(0020,000E) UI #54 [1.2.276.0.7230010.3.1.3.669896852.2528.1325171276.916] Series Instance UID
Source of toByteArray:
public static byte[] toByteArray(DicomObject obj) throws IOException {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
BufferedOutputStream bos = new BufferedOutputStream(baos);
DicomOutputStream dos = new DicomOutputStream(bos);
dos.writeDicomFile(obj);
dos.close();
byte[] data = baos.toByteArray();
return data;
}
After calling toByteArray I got output:
error java.lang.IllegalArgumentException: Missing (0002,0010) Transfer Syntax UID
I've found some information on other forums, and it seems like the DcmQR.get() method doesn't send the image data. Is it possible to force DcmQR to do it? I've read that the problem is in or with the DcmQR.createStorageService() method, but I haven't found the solution. Please help me!
Hello cneller!
I've made the changes you suggested: I added setMoveDest and setStoreDestination, and the DicomObjects are stored in the destination I added - it looks great. Then I tried to write a response handler based on FutureDimseRSP, which is used in the Association.cget method:
public class MyDimseRSP extends DimseRSPHandler implements DimseRSP{
private MyEntry entry = new MyEntry(null, null);
private boolean finished;
private int autoCancel;
private IOException ex;
@Override
public synchronized void onDimseRSP(Association as, DicomObject cmd,
DicomObject data) {
super.onDimseRSP(as, cmd, data);
MyEntry last = entry;
while (last.next != null)
last = last.next;
last.next = new MyEntry(cmd, data);
if (CommandUtils.isPending(cmd)) {
if (autoCancel > 0 && --autoCancel == 0)
try {
super.cancel(as);
} catch (IOException e) {
ex = e;
}
} else {
finished = true;
}
notifyAll();
}
@Override
public synchronized void onClosed(Association as) {
if (!finished) {
// ex = as.getException();
ex = null;
if (ex == null) {
ex = new IOException("Association to " + as.getRemoteAET()
+ " closed before receive of outstanding DIMSE RSP");
}
notifyAll();
}
}
public final void setAutoCancel(int autoCancel) {
this.autoCancel = autoCancel;
}
@Override
public void cancel(Association a) throws IOException {
if (ex != null)
throw ex;
if (!finished)
super.cancel(a);
}
public DicomObject getDataset() {
return entry.command;
}
public DicomObject getCommand() {
return entry.dataset;
}
public MyEntry getEntry() {
return entry;
}
public synchronized boolean next() throws IOException, InterruptedException {
if (entry.next == null) {
if (finished)
return false;
while (entry.next == null && ex == null)
wait();
if (ex != null)
throw ex;
}
entry = entry.next;
return true;
}
}
Here is MyEntry code:
public class MyEntry {
final DicomObject command;
final DicomObject dataset;
MyEntry next;
public MyEntry(DicomObject command, DicomObject dataset) {
this.command = command;
this.dataset = dataset;
}
public DicomObject getCommand() {
return command;
}
public DicomObject getDataset() {
return dataset;
}
public MyEntry getNext() {
return next;
}
public void setNext(MyEntry next) {
this.next = next;
}
}
Then I retyped the get method from DcmQR as follows:
public void getObject(DicomObject obj, DimseRSPHandler rspHandler)throws IOException, InterruptedException{
TransferCapability tc = selectTransferCapability(qrlevel.getGetClassUids());
MyDimseRSP myRsp=new MyDimseRSP();
if (tc == null)
throw new NoPresentationContextException(UIDDictionary
.getDictionary().prompt(qrlevel.getGetClassUids()[0])
+ " not supported by " + remoteAE.getAETitle());
String cuid = tc.getSopClass();
String tsuid = selectTransferSyntax(tc);
DicomObject key = obj.subSet(MOVE_KEYS);
assoc.cget(cuid, priority, key, tsuid, rspHandler);
assoc.waitForDimseRSP();
}
As the second argument of this method I used an instance of my response handler (MyDimseRSP). When I run my code I get null values for the command and dataset of my response handler. In the "next" variable only "command" is not null, and of course it's not the DicomObject I need. What am I doing wrong?
You're going to have to step through the code a bit (including the DCM4CHE toolkit code). I suspect you are using the default response handler, which just counts the number of completed operations, and doesn't actually store the image data from the get command.
Clearly, your for loop, below, is looping over the results of the find operation, not the get (which needs to be handled in the response handler).
for(DicomObject dco:result)
I expect you will have to override the response handler to write your DICOM files appropriately. See also the DcmRcv class for writing DICOM files from the DicomObject you'll receive.
From your edits above, I assume you are just trying to get the raw DICOM instance data (not the command that stored it). What about a response handler roughly like:
List<DicomObject> dataList = new ArrayList<DicomObject>();
@Override
public void onDimseRSP(Association as, DicomObject cmd, DicomObject data) {
if( shouldAdd(as, cmd) ) {
dataList.add(data);
}
}
Watch out for large lists, but it should get you the data in memory.

I'm getting a NullPointerException in my code because of the HashMap not being filled, or maybe not

I was given this exercise:
Implement the following class that loads and prints a set of data values.
import java.util.Iterator;
public class MyLoader {
public void loadAndPrintValues(Iterator<String> keysToLoad, Data data, Printer printer) {
// Load data values like this:
// String value = data.loadValue(key);
// Print loaded data value like this:
// printer.printEntry(key, value);
}
}
However, when I did the exercise I got a NullPointerException, probably from while (keysToLoad.hasNext()) or from key = keysToLoad.next();. I assume I got the exception because "data" was not getting filled, but I can't figure out how to fill it. Here is my code and the error message:
public interface Data{
//I made this method
public void makeEntry(String key, String value);
//given
public String loadValue(String key);
}
public interface Printer {
public void printEntry(String key, String value);
}
import java.util.Iterator;
import java.util.HashMap;
public class MyLoader implements Data, Printer {
Data data; // = new MyLoader();
Printer printer; // = new MyLoader();
Iterator<String> iter; // = new MyLoader();
String key = "";
String value = "";
HashMap<String, String> ht = new HashMap<String, String>();
public MyLoader(){
// this.database = null;
// this.key = null;
// this.value = null;
// this.ht = null;
// this.iter = null;
System.out.println("now in the constructor");
}
public MyLoader(Iterator<String> iter, Data data, Printer printer){
this.data = data;
this.printer = printer;
this.iter = iter;
}
public void loadAndPrintValues(Iterator<String> keysToLoad, Data data, Printer printer) {
try {
if (ht.isEmpty()){
System.out.println("ht is empty");
throw new NullPointerException("Database is empty.");
}else {
this.data = data;
}
while (keysToLoad.hasNext()){
// Load data values like this:
key = keysToLoad.next();
value = data.loadValue(key);
// Print loaded data value like this:
printer.printEntry(key, value);
}
}catch (NullPointerException npe){
System.out.println("caught null pointer ");
System.out.println(npe.getMessage());
}
}
@Override
public void makeEntry(String key, String value){
ht.put(key, value);
}
@Override
public void printEntry(String key, String value) {
System.out.println("[" + key + " : " + value + "]");
}
@Override
public String loadValue(String key) {
System.out.println("loadValue:" + key);
if(this.ht.containsKey(key))
return this.ht.get(key);
else {
System.out.println("No key in database.");
throw new NullPointerException("No key in data.");
}
}
}
public class test {
public static void main(String[] args) {
try {
MyLoader mdbl = new MyLoader();
mdbl.makeEntry("0", "zero");
mdbl.makeEntry("1", "One");
mdbl.makeEntry("2", "Two");
mdbl.makeEntry("3", "Three");
mdbl.loadAndPrintValues(mdbl.iter, mdbl.data, mdbl.printer);
}
catch(NullPointerException e){
System.out.println(e.getMessage());
}
}
}
mdbl.iter is never assigned a value, i.e. it is null (by default, since you are using the MyLoader constructor without arguments, in which iter is not assigned). So when you pass it to a method that tries to perform operations on it, you naturally get a NullPointerException (not an error).
You should not have
try{ /* ... */ } catch (NullPointerException npe){ /* ... */ }
blocks because any decent IDE will allow you to get to the line where the exception was thrown with one click, which is not the case if you catch the exception and simply print a message to System.out.
You are passing null values for the Iterator, Data and Printer parameters of the method loadAndPrintValues(), which will result in a NullPointerException.
In this line:
mdbl.loadAndPrintValues(mdbl.iter, mdbl.data, mdbl.printer);
you didn't initialize mdbl.data and mdbl.printer.
Never ever catch a NullPointerException. Remove the catch block with NullPointerException, run it and then post the stack trace.
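A hedged sketch, not from the original answers, of one way to call loadAndPrintValues with non-null arguments: MyLoader implements both Data and Printer, so the same instance can be passed for both, together with an iterator over the keys that were added with makeEntry.
import java.util.Arrays;
import java.util.Iterator;

public class TestWithArguments {
    public static void main(String[] args) {
        MyLoader mdbl = new MyLoader();
        mdbl.makeEntry("0", "zero");
        mdbl.makeEntry("1", "One");
        Iterator<String> keys = Arrays.asList("0", "1").iterator();
        // the loader itself serves as Data and Printer, so nothing is null
        mdbl.loadAndPrintValues(keys, mdbl, mdbl);
    }
}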
