Serialize array in Hadoop - Java

I want to serialize a String array "textData" and send it from the mapper to the reducer:
public void map(LongWritable key, Text value, OutputCollector<IntWritable, Text> output, Reporter reporter) throws IOException {
Path pt = new Path("E:\\spambase.txt");
FileSystem fs = FileSystem.get(new Configuration());
BufferedReader textReader = new BufferedReader(new InputStreamReader(fs.open(pt)));
int numberOfLines = readLines(); // helper (not shown) that counts the lines in the file
String[] textData = new String[numberOfLines];
int i;
for (i = 0; i < numberOfLines; i++) {
textData[i] = textReader.readLine();
}
textReader.close();

You seem to have some misunderstanding about how the MapReduce process works.
The mapper should ideally not read an entire file within itself.
A Job object generates a collection of InputSplits for a given input path.
By default, Hadoop reads each split one line at a time, whether the input path is a single file or a directory of files.
Each line is passed to your map method as the Text value, with the LongWritable key holding the byte offset of that line within the input.
It's not clear what you are trying to output, but it sounds like you are looking for the ArrayWritable class. You serialize data to the reducer with output.collect(), but you also need to change your mapper output value type from Text to an ArrayWritable so that you can call output.collect(someKey, new ArrayWritable(textData)).
It's worth pointing out that you're using the deprecated mapred libraries rather than the mapreduce ones, and that E:\\ is not an HDFS path but a local filesystem path.
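As a hedged sketch of what that could look like with the old mapred API (the TextArrayWritable subclass name and the IntWritable key below are assumptions, not code from the question; a small ArrayWritable subclass is needed so Hadoop can instantiate the value on the reduce side):
public static class TextArrayWritable extends ArrayWritable {
public TextArrayWritable() { super(Text.class); } // no-arg constructor required for deserialization
public TextArrayWritable(String[] strings) {
super(Text.class);
Text[] texts = new Text[strings.length];
for (int i = 0; i < strings.length; i++) {
texts[i] = new Text(strings[i]);
}
set(texts);
}
}
// In the mapper, after textData has been filled (output is now OutputCollector<IntWritable, TextArrayWritable>,
// and the job configuration must declare conf.setMapOutputValueClass(TextArrayWritable.class)):
output.collect(new IntWritable(1), new TextArrayWritable(textData));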

Related

Map Reduce program to merge multiple xml files to a single xml file

I am new to HDInsight and the Hadoop MapReduce concept. I am trying to merge multiple XML files into a single XML file using a MapReduce program. My intention is to merge each XML file into a destination XML file by prepending and appending the file name as start and end tags.
For example, the XML files below should be merged into the single XML file shown underneath.
Input XML Files
<xml><a></a></xml>
<xml><b></b></xml>
<xml><c></c></xml>
Output XML File
<xml>
<File1Name><xml><a></a></xml></File1Name>
<File2Name><xml><b></b></xml></File2Name>
<File3Name><xml><c></c></xml></File3Name>
</xml>
Question 1: Is it possible to assign one XML file to each mapper and create a key/value pair, with the key as the file name and the value as the XML content wrapped in the file name as start and end tags, and then have the reducer merge all the XMLs into a single context and output the XML shown above?
Question 2: How can I get the file name as the key in the mapper code?
Answer 1:
I don't suggest sending just a single XML file to each mapper unless the files are over 1 GB apiece. Instead, you can send a list of XML locations to your mapper and then, in your mapper code, open each location and extract the data into your output.
Answer 2:
If you are using Azure Blob Storage, you can list all the blobs in a container and assign them to input splits.
How to create your list of InputSplits:
ArrayList<InputSplit> ret = new ArrayList<InputSplit>();
/* Do this for each path we receive. Creates a directory of splits in this order, s = input path:
(s1,1),(s2,1)…(sN,1),(s1,2),(sN,2),(sN,3) etc..
*/
for (int i = numMinNameHashSplits; i <= Math.min(numMaxNameHashSplits, numNameHashSplits - 1); i++) {
for (Path inputPath : inputPaths) {
ret.add(new ParseDirectoryInputSplit(inputPath.toString(), i));
System.out.println(i + " " + inputPath.toString());
}
}
return ret;
}
}
Once the List<InputSplit> is assembled, each InputSplit is handed to a RecordReader class, where each key/value pair is read and then passed to the map task. The initialization of the RecordReader uses the InputSplit, a string representing the location of a "folder" of invoices in blob storage, to return a list of all blobs within that folder (the blobs variable below). The Java code below demonstrates the creation of the record reader for each hash slot and the resulting list of blobs in that location.
public class ParseDirectoryFileNameRecordReader
extends RecordReader<IntWritable, Text> {
private int nameHashSlot;
private int numNameHashSlots;
private Path myDir;
private Path currentPath;
private Iterator<ListBlobItem> blobs;
private int currentLocation;
@Override
public void initialize(InputSplit split, TaskAttemptContext context)
throws IOException, InterruptedException {
myDir = ((ParseDirectoryInputSplit) split).getDirectoryPath();
//getNameHashSlot tells us which slot this record reader is responsible for
nameHashSlot = ((ParseDirectoryInputSplit) split).getNameHashSlot();
//gets the total number of hash slots
numNameHashSlots = getNumNameHashSplits(context.getConfiguration());
//gets the input credentials for the storage account assigned to this record reader
String inputCreds = getInputCreds(context.getConfiguration());
//break the directory path apart to get the account name
String[] authComponents = myDir.toUri().getAuthority().split("#");
String accountName = authComponents[1].split("\\.")[0];
String containerName = authComponents[0];
String accountKey = Utils.returnInputkey(inputCreds, accountName);
System.out.println("This mapper is assigned the following account: " + accountName);
StorageCredentials creds = new StorageCredentialsAccountAndKey(accountName, accountKey);
CloudStorageAccount account = new CloudStorageAccount(creds);
CloudBlobClient client = account.createCloudBlobClient();
CloudBlobContainer container = client.getContainerReference(containerName);
blobs = container.listBlobs(myDir.toUri().getPath().substring(1) + "/", true, EnumSet.noneOf(BlobListingDetails.class), null, null).iterator();
currentLocation = -1;
}
Once initialized, the record reader is used to pass the next key to the map task. This is controlled by the nextKeyValue method, which is called each time the framework needs the next record for the map task. The Java code below demonstrates this.
//This checks whether the next key/value is assigned to this task or to another mapper. If it is assigned to this task, the location is passed to the mapper; otherwise return false.
@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
while (blobs.hasNext()) {
ListBlobItem currentBlob = blobs.next();
//The blob's name hashes to a number between 1 and the number of hash slots. If it matches the number assigned to this mapper and the blob's length is greater than 0, return the path to the map function.
if (doesBlobMatchNameHash(currentBlob) && getBlobLength(currentBlob) > 0) {
String[] pathComponents = currentBlob.getUri().getPath().split("/");
String pathWithoutContainer =
currentBlob.getUri().getPath().substring(pathComponents[1].length() + 1);
currentPath = new Path(myDir.toUri().getScheme(), myDir.toUri().getAuthority(), pathWithoutContainer);
currentLocation++;
return true;
}
}
return false;
}
The logic in the map function is then simply as follows, with inputStream containing the entire XML string:
Path inputFile = new Path(value.toString());
FileSystem fs = inputFile.getFileSystem(context.getConfiguration());
//Input stream contains all data from the blob in the location provided by Text
FSDataInputStream inputStream = fs.open(inputFile);
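From here, a minimal sketch of the rest of the map logic (reading the whole blob into memory and using the file name as both the output key and the wrapper tag is an assumption based on the question, not code from the original post):
// Read the whole blob into a string (assumes each XML file fits in memory).
ByteArrayOutputStream bos = new ByteArrayOutputStream();
org.apache.hadoop.io.IOUtils.copyBytes(inputStream, bos, context.getConfiguration(), false);
inputStream.close();
String xmlContent = bos.toString("UTF-8");
// Wrap the content in the file name as start and end tags, keyed by the file name,
// so a single reducer can concatenate all values under one root element.
String fileName = inputFile.getName();
context.write(new Text(fileName), new Text("<" + fileName + ">" + xmlContent + "</" + fileName + ">"));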
Resources:
http://www.andrewsmoll.com/3-hacks-for-hadoop-and-hdinsight-clusters/ "Hack 3"
http://blogs.msdn.com/b/mostlytrue/archive/2014/04/10/merging-small-files-on-hdinsight.aspx

Change the default delimiter of the mapreduce

Hi, I am a beginner with MapReduce, and I want to program WordCount so that it outputs the key/value pairs. However, I don't want to use a tab as the key/value delimiter in the output file. How can I change it?
The code I use is slightly different from the example one. Here is the driver class.
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "Job1");
job.setJarByClass(Simpletask.class);
job.setMapperClass(TokenizerMapper.class);
//job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
Since I want the output file name to correspond to the reducer partition, I use MultipleOutputs (mos.write()) in the reduce function, so the code is slightly different.
public void reduce(IntWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
String accu = "";
for (Text val : values) {
String[] entry = val.toString().split(",");
String MBR = entry[1];
//ASSUME MBR IS ENTRY 1. IT CAN BE REPLACED BY INVOKING A FUNCTION TO CALCULATE MBR([COORDINATES])
String mes_line = entry[0] + ",MBR" + MBR + " ";
result.set(mes_line);
mos.write(key, result, generateFileName(key));
}
}
Any help will be appreciated! Thank you!
Since you are using FileInputFormat, the key is the line offset in the file and the value is a line from the input file. It is up to the mapper to split that input line on whatever delimiter you want; you can do the split inside the map method when each record is read. This default key/value behavior comes from the particular input format, e.g. TextInputFormat.
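As a minimal sketch of that idea (a hypothetical mapper, not the asker's TokenizerMapper; the comma delimiter and two-field records are assumptions), splitting each input line inside map():
public static class CommaSplitMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
@Override
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
// Split the raw line on a comma instead of relying on the default tokenization.
String[] fields = value.toString().split(",");
if (fields.length >= 2) {
context.write(new IntWritable(Integer.parseInt(fields[0])), new Text(fields[1]));
}
}
}
If the goal is specifically to change the tab that TextOutputFormat writes between the output key and value, that separator is controlled by the mapreduce.output.textoutputformat.separator property in Hadoop 2.x, e.g. conf.set("mapreduce.output.textoutputformat.separator", ",") in the driver.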

Reading Data From FTP Server in Hadoop/Cascading

I want to read data from an FTP server. I am providing the path of the file, which resides on the FTP server, in the format ftp://Username:Password#host/path.
When I use a MapReduce program to read data from the file, it works fine. Now I want to read data from the same file through the Cascading framework. I am using the Hfs tap of the Cascading framework to read the data, and it throws the following exception:
java.io.IOException: Stream closed
at org.apache.hadoop.fs.ftp.FTPInputStream.close(FTPInputStream.java:98)
at java.io.FilterInputStream.close(Unknown Source)
at org.apache.hadoop.util.LineReader.close(LineReader.java:83)
at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:168)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:254)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:440)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Below is the code of cascading framework from where I am reading the files:
public class FTPWithHadoopDemo {
public static void main(String args[]) {
Tap source = new Hfs(new TextLine(new Fields("line")), "ftp://user:pwd#xx.xx.xx.xx//input1");
Tap sink = new Hfs(new TextLine(new Fields("line1")), "OP\\op", SinkMode.REPLACE);
Pipe pipe = new Pipe("First");
pipe = new Each(pipe, new RegexSplitGenerator("\\s+"));
pipe = new GroupBy(pipe);
Pipe tailpipe = new Every(pipe, new Count());
FlowDef flowDef = FlowDef.flowDef().addSource(pipe, source).addTailSink(tailpipe, sink);
new HadoopFlowConnector().connect(flowDef).complete();
}
}
I tried to look in the Hadoop source code for this exception. I found that in the MapTask class there is a method runOldMapper which deals with the stream, and in that method there is a finally block where the stream gets closed (in.close()). When I remove that line from the finally block, it works fine. Below is the code:
private <INKEY, INVALUE, OUTKEY, OUTVALUE> void runOldMapper(final JobConf job, final TaskSplitIndex splitIndex,
final TaskUmbilicalProtocol umbilical, TaskReporter reporter)
throws IOException, InterruptedException, ClassNotFoundException {
InputSplit inputSplit = getSplitDetails(new Path(splitIndex.getSplitLocation()), splitIndex.getStartOffset());
updateJobWithSplit(job, inputSplit);
reporter.setInputSplit(inputSplit);
RecordReader<INKEY, INVALUE> in = isSkipping()
? new SkippingRecordReader<INKEY, INVALUE>(inputSplit, umbilical, reporter)
: new TrackedRecordReader<INKEY, INVALUE>(inputSplit, job, reporter);
job.setBoolean("mapred.skip.on", isSkipping());
int numReduceTasks = conf.getNumReduceTasks();
LOG.info("numReduceTasks: " + numReduceTasks);
MapOutputCollector collector = null;
if (numReduceTasks > 0) {
collector = new MapOutputBuffer(umbilical, job, reporter);
} else {
collector = new DirectMapOutputCollector(umbilical, job, reporter);
}
MapRunnable<INKEY, INVALUE, OUTKEY, OUTVALUE> runner = ReflectionUtils.newInstance(job.getMapRunnerClass(),
job);
try {
runner.run(in, new OldOutputCollector(collector, conf), reporter);
collector.flush();
} finally {
// close
in.close(); // close input
collector.close();
}
}
Please assist me in solving this problem.
Thanks,
Arshadali
After some effort I found out that Hadoop uses the org.apache.hadoop.fs.ftp.FTPFileSystem class for FTP.
This class doesn't support seek, i.e. seeking to a given offset from the start of the file. Data is read in one block and then the file system seeks to the next block to read. The default block size for FTPFileSystem is 4 KB, so with seek unsupported it can only read data less than or equal to 4 KB.
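Given that limitation, one possible workaround (a hedged sketch, not part of the diagnosis above; the staging path and URI form are assumptions) is to copy the FTP file onto HDFS first and point the Hfs tap at the staged copy:
// Stage the FTP file onto HDFS before running the Cascading flow.
Configuration conf = new Configuration();
FileSystem ftpFs = FileSystem.get(URI.create("ftp://user:pwd@xx.xx.xx.xx/"), conf);
FileSystem hdfs = FileSystem.get(conf);
Path src = new Path("/input1");          // path on the FTP server (assumed)
Path dst = new Path("/staging/input1");  // staging path on HDFS (assumed)
FileUtil.copy(ftpFs, src, hdfs, dst, false, conf);
// Then build the source tap against the HDFS copy instead of the FTP URI:
Tap source = new Hfs(new TextLine(new Fields("line")), dst.toString());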

Java: CSV File Easy Read/Write

I'm working on a program that requires quick access to a CSV comma-delimited spreadsheet file.
So far I've been able to read from it easily using a BufferedReader.
However, now I want to be able to edit the data it reads, then export it BACK to the CSV.
The spreadsheet contains names, phone numbers, email addresses, etc. And the program lists everyone's data, and when you click on them it brings up a page with more detailed information, also pulled from the CSV. On that page you can edit the data, and I want to be able to click a "Save Changes" button, then export the data back to its appropriate line in the CSV--or delete the old one, and append the new.
I'm not very familiar with using a BufferedWriter, or whatever it is I should be using.
What I started to do is create a custom class called FileIO. It contains both a BufferedReader and a BufferedWriter. So far it has a method called read() that returns bufferedReader.readLine(). Now I want a method called write(String line).
public static class FileIO {
BufferedReader read;
BufferedWriter write;
public FileIO (String file) throws MalformedURLException, IOException {
read = new BufferedReader(new InputStreamReader (getUrl(file).openStream()));
write = new BufferedWriter (new FileWriter (file));
}
public static URL getUrl (String file) throws IOException {
return //new URL (fileServer + file).openStream()));
FileIO.class.getResource(file);
}
public String read () throws IOException {
return read.readLine();
}
public void write (String line) {
String [] data = line.split("\\|");
String firstName = data[0];
// int lineNum = findLineThatStartsWith(firstName);
// write.writeLine(lineNum, line);
}
};
I'm hoping somebody has an idea as to how I can do this?
Rather than reinventing the wheel, you could have a look at OpenCSV, which supports reading and writing of CSV files. A sketch of reading and writing follows below.
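A minimal OpenCSV sketch (the file names and columns here are made up for illustration):
import com.opencsv.CSVReader;
import com.opencsv.CSVWriter;
import java.io.FileReader;
import java.io.FileWriter;

public class OpenCsvExample {
public static void main(String[] args) throws Exception {
// Read every row of an existing CSV file.
CSVReader reader = new CSVReader(new FileReader("contacts.csv"));
String[] row;
while ((row = reader.readNext()) != null) {
System.out.println(row[0] + " -> " + row[1]);
}
reader.close();
// Write rows out (here to a new file); CSVWriter handles quoting and separators.
CSVWriter writer = new CSVWriter(new FileWriter("contacts-out.csv"));
writer.writeNext(new String[] { "John Doe", "555-1234", "john@example.com" });
writer.close();
}
}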
Please consider Apache commons csv.
To quickly understand the API, there are four important classes:
CSVFormat
Specifies the format of a CSV file and parses input.
CSVParser
Parses CSV files according to the specified format.
CSVPrinter
Prints values in a CSV format.
CSVRecord
A CSV record parsed from a CSV file.
Code example:
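A minimal sketch using these four classes (the file name and column names are assumptions):
import java.io.FileReader;
import java.io.FileWriter;
import java.io.Reader;
import java.io.Writer;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVPrinter;
import org.apache.commons.csv.CSVRecord;

public class CommonsCsvExample {
public static void main(String[] args) throws Exception {
// Parse an existing file; the first record is treated as the header row.
Reader in = new FileReader("contacts.csv");
CSVParser parser = CSVFormat.DEFAULT.withFirstRecordAsHeader().parse(in);
for (CSVRecord record : parser) {
System.out.println(record.get("name") + " " + record.get("phone"));
}
parser.close();
// Print records to a new file with a header row.
Writer out = new FileWriter("contacts-out.csv");
CSVPrinter printer = new CSVPrinter(out, CSVFormat.DEFAULT.withHeader("name", "phone", "email"));
printer.printRecord("John Doe", "555-1234", "john@example.com");
printer.close();
}
}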
The spreadsheet contains names, phone numbers, email addresses, etc. And the program lists everyone's data, and when you click on them it brings up a page with more detailed information, also pulled from the CSV. On that page you can edit the data, and I want to be able to click a "Save Changes" button, then export the data back to its appropriate line in the CSV--or delete the old one, and append the new.
The content of a file is a sequence of bytes. CSV is a text based file format, i.e. the sequence of byte is interpreted as a sequence of characters, where newlines are delimited by special newline characters.
Consequently, if the length of a line increases, the characters of all following lines need to be moved to make room for the new characters. Likewise, to delete a line you must move the later characters to fill the gap. That is, you can not update a line in a csv (at least not when changing its length) without rewriting all following lines in the file. For simplicity, I'd rewrite the entire file.
Since you already have code to write and read the CSV file, adapting it should be straightforward. But before you do that, it might be worth asking yourself if you're using the right tool for the job. If the goal is to keep a list of records and edit individual records in a form, programs such as Microsoft Access or whatever the OpenOffice equivalent is called might be a more natural fit. If your UI needs go beyond what these programs provide, using a relational database to keep your data is probably a better fit (more efficient and flexible than a CSV).
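As a minimal sketch of the rewrite-the-whole-file approach with java.nio.file (the pipe delimiter and first-field key come from the FileIO class above; the file name and the updatedLine variable holding the edited record are assumptions):
// Load every line, replace the record whose first field matches the edited record, then rewrite the file.
List<String> lines = Files.readAllLines(Paths.get("contacts.csv"));
for (int i = 0; i < lines.size(); i++) {
if (lines.get(i).split("\\|")[0].equals(updatedLine.split("\\|")[0])) {
lines.set(i, updatedLine);
break;
}
}
Files.write(Paths.get("contacts.csv"), lines);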
Add Dependencies
implementation 'com.opencsv:opencsv:4.6'
Add the code below in onCreate():
InputStreamReader is = null;
try {
String path= "storage/emulated/0/Android/media/in.bioenabletech.imageProcessing/MLkit/countries_image_crop.csv";
CSVReader reader = new CSVReader(new FileReader(path));
String[] nextLine;
int lineNumber = 0;
while ((nextLine = reader.readNext()) != null) {
lineNumber++;
//print the CSV column you want: nextLine[0] is the first column, nextLine[1] the second, and so on
Log.e(TAG, "onCreate: " + nextLine[2]);
}
}
catch (Exception e)
{
Log.e(TAG, "onCreate: "+e );
}
I solved it using
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-csv</artifactId>
<version>2.8.6</version>
</dependency>
and
private static final CsvMapper mapper = new CsvMapper();
public static <T> List<T> readCsvFile(MultipartFile file, Class<T> clazz) throws IOException {
InputStream inputStream = file.getInputStream();
CsvSchema schema = mapper.schemaFor(clazz).withHeader().withColumnReordering(true);
ObjectReader reader = mapper.readerFor(clazz).with(schema);
return reader.<T>readValues(inputStream).readAll();
}

How to convert .txt file to Hadoop's sequence file format

To effectively utilise MapReduce jobs in Hadoop, I need the data to be stored in Hadoop's sequence file format. However, the data is currently only in flat .txt format. Can anyone suggest a way I can convert a .txt file to a sequence file?
The simplest answer is just an "identity" job that has a SequenceFile output.
It looks like this in Java:
public static void main(String[] args) throws IOException,
InterruptedException, ClassNotFoundException {
Configuration conf = new Configuration();
Job job = new Job(conf);
job.setJobName("Convert Text");
job.setJarByClass(Mapper.class);
job.setMapperClass(Mapper.class);
job.setReducerClass(Reducer.class);
// increase if you need sorting or a special number of files
job.setNumReduceTasks(0);
job.setOutputKeyClass(LongWritable.class);
job.setOutputValueClass(Text.class);
job.setOutputFormatClass(SequenceFileOutputFormat.class);
job.setInputFormatClass(TextInputFormat.class);
TextInputFormat.addInputPath(job, new Path("/lol"));
SequenceFileOutputFormat.setOutputPath(job, new Path("/lolz"));
// submit and wait for completion
job.waitForCompletion(true);
}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
//White, Tom (2012-05-10). Hadoop: The Definitive Guide (Kindle Locations 5375-5384). OReilly Media - A. Kindle Edition.
public class SequenceFileWriteDemo {
private static final String[] DATA = { "One, two, buckle my shoe", "Three, four, shut the door", "Five, six, pick up sticks", "Seven, eight, lay them straight", "Nine, ten, a big fat hen" };
public static void main(String[] args) throws IOException {
String uri = args[0];
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(uri), conf);
Path path = new Path(uri);
IntWritable key = new IntWritable();
Text value = new Text();
SequenceFile.Writer writer = null;
try {
writer = SequenceFile.createWriter(fs, conf, path, key.getClass(), value.getClass());
for (int i = 0; i < 100; i++) {
key.set(100 - i);
value.set(DATA[i % DATA.length]);
System.out.printf("[% s]\t% s\t% s\n", writer.getLength(), key, value);
writer.append(key, value);
}
} finally {
IOUtils.closeStream(writer);
}
}
}
It depends on what the format of the TXT file is. Is it one line per record? If so, you can simply use TextInputFormat which creates one record for each line. In your mapper you can parse that line and use it whichever way you choose.
If it isn't one line per record, you might need to write your own InputFormat implementation. Take a look at this tutorial for more info.
You can also just create an intermediate Hive table, LOAD DATA the CSV contents straight into it, then create a second table stored as sequencefile (partitioned, clustered, etc.) and insert into it with a select from the intermediate table. You can also set options for compression, e.g.:
set hive.exec.compress.output = true;
set io.seqfile.compression.type = BLOCK;
set mapred.output.compression.codec = org.apache.hadoop.io.compress.SnappyCodec;
create table... stored as sequencefile;
insert overwrite table ... select * from ...;
The MR framework will then take care of the heavy lifting for you, saving you the trouble of having to write Java code.
Be watchful with the format specifier.
For example (note the space between % and s), System.out.printf("[% s]\t% s\t% s\n", writer.getLength(), key, value); will give you java.util.FormatFlagsConversionMismatchException: Conversion = s, Flags =
Instead, we should use:
System.out.printf("[%s]\t%s\t%s\n", writer.getLength(), key, value);
If your data is not on HDFS, you need to upload it to HDFS. Two options:
i) Use hdfs dfs -put on your .txt file and, once it is on HDFS, convert it to a sequence file.
ii) Take the text file as input on your HDFS client box and convert it to a SequenceFile using the SequenceFile APIs, by creating a SequenceFile.Writer and appending (key, value) pairs to it.
If you don't care about the key, you can use the line number as the key and the complete text as the value; a sketch follows below.
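A minimal sketch of option ii) with line numbers as keys (the class name and the use of command-line arguments for the local input path and HDFS output path are assumptions):
import java.io.BufferedReader;
import java.io.FileReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class TxtToSequenceFile {
public static void main(String[] args) throws Exception {
// args[0] = local .txt input, args[1] = HDFS output path for the sequence file
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(args[1]), conf);
Path outPath = new Path(args[1]);
BufferedReader txt = new BufferedReader(new FileReader(args[0]));
SequenceFile.Writer writer = null;
try {
writer = SequenceFile.createWriter(fs, conf, outPath, LongWritable.class, Text.class);
String line;
long lineNumber = 0;
while ((line = txt.readLine()) != null) {
// Line number as the key, the complete line as the value.
writer.append(new LongWritable(lineNumber++), new Text(line));
}
} finally {
IOUtils.closeStream(writer);
txt.close();
}
}
}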
If you have Mahout installed, it has a command called seqdirectory which can do it.
