I have the following code in Hadoop, where the mapper and reducer are as follows:
public static class Map2 extends Mapper<LongWritable, Text, NullWritable, Text>
{
TreeMap<Text, Text> top10 = new TreeMap<Text, Text>();
HashMap<String, String> userInfo = new HashMap<String, String>();
public void setup(Context context) throws IOException, InterruptedException
{
try
{
URI[] uris = DistributedCache.getCacheFiles(context.getConfiguration());
FileSystem fs = FileSystem.get(context.getConfiguration());
if (uris == null || uris.length == 0)
{
throw new IOException("Error reading file from distributed cache. No URIs found.");
}
String path = "./users.dat";
fs.copyToLocalFile(new Path(uris[0]), new Path(path));
BufferedReader br = new BufferedReader(new FileReader(path));
String line = null;
while((line = br.readLine()) != null)
{
String split[] = line.split("\\::");
String age = split[2];
String gender = split[1];
userInfo.put(split[0], gender + "\t" + age);
}
br.close();
}
catch(Exception e)
{
}
}
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
{
try
{
String line = value.toString();
int sum = Integer.parseInt(line.split("\\t")[1]);
String userID = line.split("\\t")[0];
String newKey = sum + " " + userID;
if(userInfo.containsKey(userID))
{
String record = userInfo.get(userID);
String val = userID + "\t" + record + "\t" + sum;
top10.put(new Text(newKey), new Text(val));
if (top10.size() > 10)
{
top10.remove(top10.firstKey());
}
}
}
catch(Exception e)
{
}
}
protected void cleanup(Context context) throws IOException, InterruptedException
{
try
{
for (Text s1 : top10.descendingMap().values())
{
context.write(NullWritable.get(), s1);
}
}
catch(Exception e)
{
}
}
}
public static class Reduce2 extends Reducer<NullWritable, Text, NullWritable, Text>
{
private TreeMap<Text, Text> top10 = new TreeMap<Text, Text>();
public void reduce(NullWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException
{
try
{
String line = values.toString();
String sum = line.split("\\t")[3];
String userID = line.split("\\t")[0];
String gender = line.split("\\t")[1];
String age = line.split("\\t")[2];
String newKey = sum + " " + userID;
String val = userID + "\t" + gender + "\t" + age + "\t" + sum;
top10.put(new Text(newKey), new Text(val));
if(top10.size() > 10)
{
top10.remove(top10.firstKey());
}
}
catch(Exception e)
{
}
}
protected void cleanup(Context context) throws IOException, InterruptedException
{
try
{
for (Text s1 : top10.descendingMap().values())
{
context.write(NullWritable.get(), s1);
}
}
catch(Exception e)
{
}
}
}
The driver method is as follows:
Configuration conf2 = new Configuration();
DistributedCache.addCacheFile(new Path("/Spring2014_HW-1/input_HW-1/users.dat").toUri(), conf2);
Job job2 = new Job(conf2, "Phase2");
job2.setOutputKeyClass(NullWritable.class);
job2.setOutputValueClass(Text.class);
job2.setJarByClass(MapSideJoin.class);
job2.setMapperClass(Map2.class);
job2.setReducerClass(Reduce2.class);
job2.setInputFormatClass(TextInputFormat.class);
job2.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.addInputPath(job2, new Path(args[1]));
FileOutputFormat.setOutputPath(job2, new Path(args[2]));
//job2.setNumReduceTasks(1);
job2.waitForCompletion(true);
I get map output records = 10 and reduce output records = 0, even though I emit output from the reducer. Where does the reducer's output disappear to?
Thanks.
Aim
I have two CSV files and am trying to join them: one contains movieId, title and the other contains userId, movieId, comment-tag. I want to find out how many comment-tags each movie has, by printing title, comment_count. So my code:
Driver
public class Driver
{
public Driver(String[] args)
{
if (args.length < 4) {
System.err.println("usage: needs a tag input path, a movie input path and an output path");
}
try {
Job job = Job.getInstance();
job.setJobName("movie tag count");
// set file input/output path
MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, TagMapper.class);
MultipleInputs.addInputPath(job, new Path(args[2]), TextInputFormat.class, MovieMapper.class);
FileOutputFormat.setOutputPath(job, new Path(args[3]));
// set jar class name
job.setJarByClass(Driver.class);
// set mapper and reducer to job
job.setReducerClass(Reducer.class);
// set output key class
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
int returnValue = job.waitForCompletion(true) ? 0 : 1;
System.out.println(job.isSuccessful());
System.exit(returnValue);
} catch (IOException | ClassNotFoundException | InterruptedException e) {
e.printStackTrace();
}
}
}
MovieMapper
public class MovieMapper extends org.apache.hadoop.mapreduce.Mapper<Object, Text, Text, Text>
{
@Override
protected void map(Object key, Text value, Context context) throws IOException, InterruptedException
{
String line = value.toString();
String[] items = line.split("(?!\\B\"[^\"]*),(?![^\"]*\"\\B)"); //comma not in quotes
String movieId = items[0].trim();
if(tryParseInt(movieId))
{
context.write(new Text(movieId), new Text(items[1].trim()));
}
}
private boolean tryParseInt(String s)
{
try {
Integer.parseInt(s);
return true;
} catch (NumberFormatException e) {
return false;
}
}
}
TagMapper
public class TagMapper extends org.apache.hadoop.mapreduce.Mapper<Object, Text, Text, Text>
{
@Override
protected void map(Object key, Text value, Context context) throws IOException, InterruptedException
{
String line = value.toString();
String[] items = line.split("(?!\\B\"[^\"]*),(?![^\"]*\"\\B)");
String movieId = items[1].trim();
if(tryParseInt(movieId))
{
context.write(new Text(movieId), new Text("_"));
}
}
private boolean tryParseInt(String s)
{
try {
Integer.parseInt(s);
return true;
} catch (NumberFormatException e) {
return false;
}
}
}
Reducer
public class Reducer extends org.apache.hadoop.mapreduce.Reducer<Text, Text, Text, IntWritable>
{
@Override
protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException
{
int noOfFrequency = 0;
Text movieTitle = new Text();
for (Text o : values)
{
if(o.toString().trim().equals("_"))
{
noOfFrequency++;
}
else
{
System.out.println(o.toString());
movieTitle = o;
}
}
context.write(movieTitle, new IntWritable(noOfFrequency));
}
}
The problem
The result I get is something like this:
title, count
_, count
title, count
title, count
_, count
title, count
_, count
How does this _ get to be the key? I can't understand it. There is an if statement checking for _: count it and don't use it as the title. Is there something wrong with the toString() method, so that the equals comparison fails? Any ideas?
It is not weird: you iterate through values, and o is a reference to an object that Hadoop reuses for every element. At some point you make movieTitle refer to the same object as o (movieTitle = o); in a later iteration the framework fills that same object with "_", so movieTitle shows "_" as well.
If you change your code like this, everything works fine:
int noOfFrequency = 0;
Text movieTitle = null;
for (Text o : values)
{
if(o.toString().trim().equals("_"))
{
noOfFrequency++;
}
else
{
movieTitle = new Text(o.toString());
}
}
context.write(movieTitle, new IntWritable(noOfFrequency));
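The essential change is copying the bytes out of the framework-reused object rather than keeping a reference to it. Note that Text also has a copy constructor, so this one-liner is an equivalent fix:
movieTitle = new Text(o); // Text(Text) deep-copies the underlying bytes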
According to the MapReduce programming model I wrote this program, where the driver code is as follows.
MY DRIVER CLASS
public class MRDriver extends Configured implements Tool
{
@Override
public int run(String[] strings) throws Exception {
if(strings.length != 2)
{
System.err.println("usage : <inputlocation> <inputlocation> <outputlocation>");
System.exit(0);
}
Job job = new Job(getConf(), "multiple files");
job.setJarByClass(MRDriver.class);
job.setMapperClass(MRMapper.class);
job.setReducerClass(MRReducer.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path(strings[0]));
FileOutputFormat.setOutputPath(job, new Path(strings[1]));
return job.waitForCompletion(true) ? 0 : 1;
//throw new UnsupportedOperationException("Not supported yet."); //To change body of generated methods, choose Tools | Templates.
}
public static void main(String[] args) throws Exception
{
Configuration conf = new Configuration();
System.exit(ToolRunner.run(conf, new MRDriver(), args));
}
}
MY MAPPER CLASS
class MRMapper extends Mapper<LongWritable, Text, Text, Text>
{
@Override
public void map(LongWritable key, Text value, Context context)
{
try
{
StringTokenizer iterator;
String idsimval = null;
iterator = new StringTokenizer(value.toString(), "\t");
String id = iterator.nextToken();
String sentival = iterator.nextToken();
if(iterator.hasMoreTokens())
idsimval = iterator.nextToken();
context.write(new Text("unique"), new Text(id + "_" + sentival + "_" + idsimval));
} catch (IOException | InterruptedException e)
{
System.out.println(e);
}
}
}
MY REDUCER CLASS
class MRReducer extends Reducer<Text, Text, Text, Text> {
String[] records;
HashMap<Long, String> sentiMap = new HashMap<>();
HashMap<Long, String> cosiMap = new HashMap<>();
private String leftIdStr;
private ArrayList<String> rightIDList, rightSimValList, matchingSimValList, matchingIDList;
private double leftVal;
private double rightVal;
private double currDiff;
private double prevDiff;
private int finalIndex;
Context newContext;
private int i;
public void reducer(Text key, Iterable<Text> value, Context context) throws IOException, InterruptedException {
for (Text string : value) {
records = string.toString().split("_");
sentiMap.put(Long.parseLong(records[0]), records[1]);
if (records[2] != null) {
cosiMap.put(Long.parseLong(records[0]), records[2]);
}
if(++i == 2588)
{
newContext = context;
newfun();
}
context.write(new Text("hello"), new Text("hii"));
}
context.write(new Text("hello"), new Text("hii"));
}
void newfun() throws IOException, InterruptedException
{
for (HashMap.Entry<Long, String> firstEntry : cosiMap.entrySet()) {
try {
leftIdStr = firstEntry.getKey().toString();
rightIDList = new ArrayList<>();
rightSimValList = new ArrayList<>();
matchingSimValList = new ArrayList<>();
matchingIDList = new ArrayList<>();
for (String strTmp : firstEntry.getValue().split(" ")) {
rightIDList.add(strTmp.substring(0, 18));
rightSimValList.add(strTmp.substring(19));
}
String tmp = sentiMap.get(Long.parseLong(leftIdStr));
if ("NULL".equals(tmp)) {
leftVal = Double.parseDouble("0");
} else {
leftVal = Double.parseDouble(tmp);
}
tmp = sentiMap.get(Long.parseLong(rightIDList.get(0)));
if ("NULL".equals(tmp)) {
rightVal = Double.parseDouble("0");
} else {
rightVal = Double.parseDouble(tmp);
}
prevDiff = Math.abs(leftVal - rightVal);
int oldIndex = 0;
for (String s : rightIDList) {
try {
oldIndex++;
tmp = sentiMap.get(Long.parseLong(s));
if ("NULL".equals(tmp)) {
rightVal = Double.parseDouble("0");
} else {
rightVal = Double.parseDouble(tmp);
}
currDiff = Math.abs(leftVal - rightVal);
if (prevDiff > currDiff) {
prevDiff = currDiff;
}
} catch (Exception e) {
}
}
oldIndex = 0;
for (String s : rightIDList) {
tmp = sentiMap.get(Long.parseLong(s));
if ("NULL".equals(tmp)) {
rightVal = Double.parseDouble("0");
} else {
rightVal = Double.parseDouble(tmp);
}
currDiff = Math.abs(leftVal - rightVal);
if (Objects.equals(prevDiff, currDiff)) {
matchingSimValList.add(rightSimValList.get(oldIndex));
matchingIDList.add(rightIDList.get(oldIndex));
}
oldIndex++;
}
finalIndex = rightSimValList.indexOf(Collections.max(matchingSimValList));
newContext.write(new Text(leftIdStr), new Text(" " + rightIDList.get(finalIndex) + ":" + rightSimValList.get(finalIndex)));
} catch (NumberFormatException nfe) {
}
}
}
}
What is the problem, and is it in the MapReduce program or in the Hadoop system configuration? Whenever I run this program, it only writes the mapper output into HDFS.
Inside your Reducer class you must override the reduce method. You have declared a method named reducer instead, which is never called.
Try modifying your function inside the Reducer class:
@Override
public void reduce(Text key, Iterable<Text> value, Context context) throws IOException, InterruptedException {
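This also explains why only the mapper output lands in HDFS: when no override matches, the framework runs Reducer's default reduce, which is an identity pass-through (paraphrasing the Hadoop source):
// Default implementation in org.apache.hadoop.mapreduce.Reducer:
// every incoming (key, value) pair is written through unchanged.
protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context)
throws IOException, InterruptedException {
for (VALUEIN value : values) {
context.write((KEYOUT) key, (VALUEOUT) value);
}
}
Adding @Override above the intended method makes the compiler reject the misspelled name.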
The map phase runs and then just quits without bothering with the reducer. The job alternately prints "Hello from mapper." and "Writing CellWithTotalAmount" and that's it. The output directory it creates is empty.
I've checked at least a dozen other "reducer won't start" questions and have not found an answer. I've checked that the output types of map match the input types of reduce, that reduce uses Iterable, that the correct output classes have been set, and so on.
Job config
public class HoursJob {
public static void main(String[] args) throws Exception {
if (args.length != 2) {
System.err.println("Usage: HoursJob <input path> <output path>");
System.exit(-1);
}
Job job = Job.getInstance();
job.setJarByClass(HoursJob.class);
job.setJobName("Hours job");
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.setMapperClass(HoursMapper.class);
job.setReducerClass(HoursReducer.class);
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(CellWithTotalAmount.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(NullWritable.class);
int ret = job.waitForCompletion(true) ? 0 : 1;
System.exit(ret);
}
}
Mapper
public class HoursMapper
extends Mapper<LongWritable, Text, IntWritable, CellWithTotalAmount> {
static double BEGIN_LONG = -74.913585;
static double BEGIN_LAT = 41.474937;
static double GRID_LENGTH = 0.011972;
static double GRID_HEIGHT = 0.008983112;
@Override
public void map(LongWritable key, Text value, Mapper.Context context)
throws IOException, InterruptedException {
System.out.println("Hello from mapper.");
String recordString = value.toString();
try {
DEBSFullRecord record = new DEBSFullRecord(recordString);
Date pickupDate = record.getPickup();
Calendar calendar = GregorianCalendar.getInstance();
calendar.setTime(pickupDate);
int pickupHour = calendar.get(Calendar.HOUR_OF_DAY);
int cellX = (int)
((record.getPickupLongitude() - BEGIN_LONG) / GRID_LENGTH) + 1;
int cellY = (int)
((BEGIN_LAT - record.getPickupLatitude()) / GRID_HEIGHT) + 1;
CellWithTotalAmount hourInfo =
new CellWithTotalAmount(cellX, cellY, record.getTotal());
context.write(new IntWritable(pickupHour), hourInfo);
} catch (Exception ex) {
System.out.println(
"Cannot parse: " + recordString + "due to the " + ex);
}
}
}
Reducer
public class HoursReducer
extends Reducer<IntWritable, CellWithTotalAmount, Text, NullWritable> {
@Override
public void reduce(IntWritable key, Iterable<CellWithTotalAmount> values,
Context context) throws IOException, InterruptedException {
System.out.println("Hello from reducer.");
int[][] cellRideCounters = getCellRideCounters(values);
CellWithRideCount cellWithMostRides =
getCellWithMostRides(cellRideCounters);
int[][] cellTotals = getCellTotals(values);
CellWithTotalAmount cellWithGreatestTotal =
getCellWithGreatestTotal(cellTotals);
String output = key + " "
+ cellWithMostRides.toString() + " "
+ cellWithGreatestTotal.toString();
context.write(new Text(output), NullWritable.get());
}
//omitted for brevity
}
Custom writable class
public class CellWithTotalAmount implements Writable {
public int cellX;
public int cellY;
public double totalAmount;
public CellWithTotalAmount(int cellX, int cellY, double totalAmount) {
this.cellX = cellX;
this.cellY = cellY;
this.totalAmount = totalAmount;
}
@Override
public void write(DataOutput out) throws IOException {
System.out.println("Writing CellWithTotalAmount");
out.writeInt(cellX);
out.writeInt(cellY);
out.writeDouble(totalAmount);
}
@Override
public void readFields(DataInput in) throws IOException {
System.out.println("Reading CellWithTotalAmount");
cellX = in.readInt();
cellY = in.readInt();
totalAmount = in.readDouble();
}
@Override
public String toString() {
return cellX + " " + cellY + " " + totalAmount;
}
}
I think a lot of exceptions are being thrown in the reduce function, so the framework cannot complete the job properly.
public class HoursReducer
extends Reducer<IntWritable, CellWithTotalAmount, Text, NullWritable> {
@Override
public void reduce(IntWritable key, Iterable<CellWithTotalAmount> values,
Context context) throws IOException, InterruptedException {
System.out.println("Hello from reducer.");
try {
    int[][] cellRideCounters = getCellRideCounters(values);
    if (cellRideCounters[0].length > 0) { // check this before using it; more explanation below
        CellWithRideCount cellWithMostRides =
            getCellWithMostRides(cellRideCounters);
        int[][] cellTotals = getCellTotals(values);
        CellWithTotalAmount cellWithGreatestTotal =
            getCellWithGreatestTotal(cellTotals);
        String output = key + " "
            + cellWithMostRides.toString() + " "
            + cellWithGreatestTotal.toString();
        context.write(new Text(output), NullWritable.get());
    }
} catch (Exception e) {
    e.printStackTrace();
    return;
}
}
}
Add a try-catch to get at the exceptions in the reduce function, and return from the function in the catch block. Also add an if statement before calling getCellWithMostRides(..); I think the issue is in there. Fill in the if condition as you see fit; I made a guess, so change it however you want if it is not right for your data.
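Two more things may be worth checking here, offered as guesses since the helper methods are omitted: Hadoop instantiates Writable values reflectively when it deserializes map output, so CellWithTotalAmount also needs a no-argument constructor; and in the new MapReduce API the values Iterable is single-pass, so the second traversal in getCellTotals(values) will see no elements. A minimal sketch of both fixes, assuming the helper methods accept any collection of values:
// In CellWithTotalAmount: needed so Hadoop can create instances
// reflectively while deserializing map output.
public CellWithTotalAmount() {
}

// In HoursReducer.reduce: buffer the single-pass iterable so it can be
// examined twice, copying each element because the framework reuses one object.
List<CellWithTotalAmount> buffered = new ArrayList<>();
for (CellWithTotalAmount v : values) {
    buffered.add(new CellWithTotalAmount(v.cellX, v.cellY, v.totalAmount));
}
int[][] cellRideCounters = getCellRideCounters(buffered);
int[][] cellTotals = getCellTotals(buffered);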
In the code below, the reduce method inside the reducer class is not executing; please help me. In my reduce method I want to write the output to multiple files, so I have used MultipleOutputs.
public class DataValidation {
public static class Map extends Mapper<LongWritable, Text, Text, Text> {
int flag = 1;
boolean result;
private HashMap<String, FileConfig> fileConfigMaps = new HashMap<String, FileConfig>();
private HashMap<String, List<LineValidator>> mapOfValidators = new HashMap<String, List<LineValidator>>();
private HashMap<String, List<Processor>> mapOfProcessors = new HashMap<String, List<Processor>>();
protected void setup(Context context) throws IOException {
System.out.println("configure inside map class");
ConfigurationParser parser = new ConfigurationParser();
Config config = parser.parse(new Configuration());
List<FileConfig> file = config.getFiles();
for (FileConfig f : file) {
try {
fileConfigMaps.put(f.getName(), f);
System.out.println("quotes in" + f.isQuotes());
System.out.println("file from xml : " + f.getName());
ValidationBuilder builder = new ValidationBuilder();
// ProcessorBuilder constructor = new ProcessorBuilder();
List<LineValidator> validators;
validators = builder.build(f);
// List<Processor> processors = constructor.build(f);
mapOfValidators.put(f.getName(), validators);
// mapOfProcessors.put(f.getName(),processors);
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
protected void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
// String filename = ((FileSplit) context.getInputSplit()).getPath()
// .getName();
FileSplit fs = (FileSplit) context.getInputSplit();
String fileName = fs.getPath().getName();
System.out.println("filename : " + fileName);
String line = value.toString();
String[] csvDataArray = null;
List<LineValidator> lvs = mapOfValidators.get(fileName);
flag = 1;
csvDataArray = line.split(",", -1);
FileConfig fc = fileConfigMaps.get(fileName);
System.out.println("filename inside fileconfig " + fc.getName());
System.out.println("quote values" + fc.isQuotes());
if (fc.isQuotes()) {
for (int i = 0; i < csvDataArray.length; i++) {
csvDataArray[i] = csvDataArray[i].replaceAll("\"", "");
}
}
for (LineValidator lv : lvs) {
if (flag == 1) {
result = lv.validate(csvDataArray, fileName);
if (result == false) {
String write = line + "," + lv.getFailureDesc();
System.out.println("write" + write);
System.out.println("key" + new Text(fileName));
// output.collect(new Text(filename), new Text(write));
context.write(new Text(fileName), new Text(write));
flag = 0;
if (lv.stopValidation(csvDataArray) == true) {
break;
}
}
}
}
}
protected void cleanup(Context context) {
System.out.println("clean up in mapper");
}
}
public static class Reduce extends Reducer<Text, Text, NullWritable, Text> {
protected void reduce(Text key, Iterator<Text> values, Context context)
throws IOException, InterruptedException {
System.out.println("inside reduce method");
while (values.hasNext()) {
System.out.println(" Nullwritable value" + NullWritable.get());
System.out.println("key inside reduce method" + key.toString());
context.write(NullWritable.get(), values.next());
// out.write(NullWritable.get(), values.next(), "/user/hadoop/"
// + context.getJobID() + "/" + key.toString() + "/part-");
}
}
}
public static void main(String[] args) throws Exception {
System.out.println("hello");
Configuration configuration = getConf();
Job job = Job.getInstance(configuration);
job.setJarByClass(DataValidation.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(NullWritable.class);
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.waitForCompletion(true);
}
private static Configuration getConf() {
return new Configuration();
}
}
You have not properly overridden the reduce method. Use this:
public void reduce(Text key, Iterable<Text> values,
Context context) throws IOException, InterruptedException
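For reference, the reduce in the question takes Iterator<Text> (old-API style), so it does not override the new API's reduce(Text, Iterable<Text>, Context) and is never invoked; the default identity reduce runs instead. With @Override the compiler catches this. A sketch of the corrected method, keeping the question's body:
@Override
protected void reduce(Text key, Iterable<Text> values, Context context)
throws IOException, InterruptedException {
System.out.println("inside reduce method");
for (Text value : values) {
context.write(NullWritable.get(), value);
}
}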
I wrote a simple Java application and I have a problem; please help me.
I have a file (JUST EXAMPLE):
1.TXT
-------
SET MRED:NAME=MRED:0,MREDID=60;
SET BCT:NAME=BCT:0,NEPE=DCS,T2=5,DK0=KOR;
CREATE LCD:NAME=LCD:0;
-------
and this is my source code
import java.io.IOException;
import java.io.*;
import java.util.StringTokenizer;
class test1 {
private final int FLUSH_LIMIT = 1024 * 1024;
private StringBuilder outputBuffer = new StringBuilder(
FLUSH_LIMIT + 1024);
public static void main(String[] args) throws IOException {
test1 p=new test1();
String fileName = "i:\\1\\1.txt";
File file = new File(fileName);
BufferedReader br = new BufferedReader(new FileReader(file));
String line;
while ((line = br.readLine()) != null) {
StringTokenizer st = new StringTokenizer(line, ";|,");
while (st.hasMoreTokens()) {
String token = st.nextToken();
p.processToken(token);
}
}
p.flushOutputBuffer();
}
private void processToken(String token) {
if (token.startsWith("MREDID=")) {
String value = getTokenValue(token,"=");
outputBuffer.append("MREDID:").append(value).append("\n");
} else if (token.startsWith("DK0=")) {
String value = getTokenValue(token,"=");
outputBuffer.append("DK0=:").append(value).append("\n");
} else if (token.startsWith("NEPE=")) {
String value = getTokenValue(token,"=");
outputBuffer.append("NEPE:").append(value).append("\n");
}
if (outputBuffer.length() > FLUSH_LIMIT) {
flushOutputBuffer();
}
}
private String getTokenValue(String token,String find) {
int start = token.indexOf(find) + 1;
int end = token.length();
String value = token.substring(start, end);
return value;
}
private void flushOutputBuffer() {
System.out.print(outputBuffer);
outputBuffer = new StringBuilder(FLUSH_LIMIT + 1024);
}
}
I want this output:
MREDID:60
DK0=:KOR
NEPE:DCS
But this application shows me this:
MREDID:60
NEPE:DCS
DK0=:KOR
Please tell me how I can handle this: DK0 must come first, and this is just a sample; my real application has 14,000 lines.
Thanks ...
Instead of outputting each value when you read it, put it in a HashMap. Once you've read the entire file, output in the order you want by getting the values from the HashMap.
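A minimal sketch of that idea wired into the question's parser, assuming MREDID, DK0 and NEPE are the only fields of interest and each appears once: processToken stores into a map, and the ordered printing happens once at the end.
import java.util.HashMap;
import java.util.Map;

// field in test1: collects values as they are parsed, keyed by field name
private final Map<String, String> fields = new HashMap<>();

private void processToken(String token) {
for (String name : new String[] {"MREDID", "DK0", "NEPE"}) {
if (token.startsWith(name + "=")) {
fields.put(name, getTokenValue(token, "="));
}
}
}

// called once in main, after the read loop; the output order here is fixed
private void printOrdered() {
System.out.println("MREDID:" + fields.get("MREDID"));
System.out.println("DK0=:" + fields.get("DK0"));
System.out.println("NEPE:" + fields.get("NEPE"));
}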
Use a Hashtable (or, since nothing here needs synchronization, a HashMap) to store the values, and print from it in the desired order after parsing all tokens.
//initialize the table
Hashtable<String, String> ht = new Hashtable<>();
//instead of outputBuffer.append, put the values into the table like
ht.put("NEPE", value);
ht.put("DK0", value); //etc
//print the values after the while loop
System.out.println("MREDID:" + ht.get("MREDID"));
System.out.println("DK0:" + ht.get("DK0"));
System.out.println("NEPE:" + ht.get("NEPE"));
Create a class, something like
class Data {
private int mredid;
private String nepe;
private String dk0;
public void setMredid(int mredid) {
this.mredid = mredid;
}
public void setNepe(String nepe) {
this.nepe = nepe;
}
public void setDk0(String dk0) {
this.dk0 = dk0;
}
public String toString() {
String ret = "MREDID:" + mredid + "\n";
ret = ret + "DK0=:" + dk0 + "\n";
ret = ret + "NEPE:" + nepe + "\n";
return ret;
}
}
Then make a single Data instance a field, populate it in processToken, and append it to the buffer once after the whole file has been read, so the fields print in the fixed order defined by toString() no matter what order the tokens appear in:
private Data data = new Data();
private void processToken(String token) {
if (token.startsWith("MREDID=")) {
String value = getTokenValue(token,"=");
data.setMredid(Integer.parseInt(value));
} else if (token.startsWith("DK0=")) {
String value = getTokenValue(token,"=");
data.setDk0(value);
} else if (token.startsWith("NEPE=")) {
String value = getTokenValue(token,"=");
data.setNepe(value);
}
}
// in main, after the read loop and before p.flushOutputBuffer():
// p.outputBuffer.append(p.data.toString());
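With this change, the sample file above yields exactly the desired output (MREDID:60, then DK0=:KOR, then NEPE:DCS), because the order now comes from Data.toString() rather than from the order of the tokens in the file.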