I am using the readCsvFile(path) function in the Apache Flink API to read a CSV file and store it in a list variable. How does it work with multiple threads?
For example, does it split the file based on some statistics? If yes, which statistics? Or does it read the file line by line and then send the lines to threads to process?
Here is the sample code:
// default parallelism is 4
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
String csvPath = "data/weather.csv";
List<Tuple2<String, Double>> csv = env.readCsvFile(csvPath)
        .types(String.class, Double.class)
        .collect();
Suppose we have an 800 MB CSV file on local disk: how does it distribute the work between those 4 threads?
The readCsvFile() API method internally creates a data source with a CsvInputFormat which is based on Flink's FileInputFormat. This InputFormat generates a list of so-called InputSplits. An InputSplit defines which range of a file should be scanned. The splits are then distributed to data source tasks.
So, each parallel task scans a certain region of a file and parses its content. This is very similar to how it is done by MapReduce / Hadoop.
This is the same issue covered in How does Hadoop process records split across block boundaries?
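As a rough, hypothetical illustration of the byte-range splitting (this is not Flink's actual code; the class name and numbers are made up for the example), an 800 MB file read with parallelism 4 would be carved into four ranges of roughly 200 MB each:
public class SplitSketch {
    public static void main(String[] args) {
        long fileLength = 800L * 1024 * 1024; // the 800 MB file from the question
        int parallelism = 4;                  // default parallelism
        long splitSize = fileLength / parallelism;

        for (int i = 0; i < parallelism; i++) {
            long start = i * splitSize;
            // The last split takes whatever is left over.
            long length = (i == parallelism - 1) ? fileLength - start : splitSize;
            System.out.printf("split %d: start=%d, length=%d%n", i, start, length);
            // Each data source task scans [start, start + length) and parses the
            // records it finds there; a record that crosses a split boundary is
            // completed by the task that owns the split where the record starts.
        }
    }
}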
I extracted some code from the DelimitedInputFormat file in the flink-release-1.1.3 sources.
// else ..
int toRead;
if (this.splitLength > 0) {
    // if we have more data, read that
    toRead = this.splitLength > this.readBuffer.length ? this.readBuffer.length : (int) this.splitLength;
}
else {
    // if we have exhausted our split, we need to complete the current record, or read one
    // more across the next split.
    // the reason is that the next split will skip over the beginning until it finds the first
    // delimiter, discarding it as an incomplete chunk of data that belongs to the last record in the
    // previous split.
    toRead = this.readBuffer.length;
    this.overLimit = true;
}
It's clear that if the line delimiter is not found within the current split, the reader keeps reading into the next split to complete the record. (I haven't found the corresponding code yet; I will keep looking.)
Plus: the image below shows how I traced the code from readCsvFile() to DelimitedInputFormat.
Related
I have a Dataflow pipeline in which I parse a file. If I find any incorrect records, I write them to a GCS bucket. But when there are no errors in the input data, TextIO still writes an empty file (containing only the header) to the GCS bucket.
So how can we skip this step when the PCollection is empty?
errorRecords.apply("WritingErrorRecords", TextIO.write().to(options.getBucketPath())
        .withHeader("ID|ERROR_CODE|ERROR_MESSAGE")
        .withoutSharding()
        .withSuffix(".txt")
        .withShardNameTemplate("-SSS")
        .withNumShards(1));
TextIO.write() always writes at least one shard, even if it is empty. As you are writing to a single shard anyway, you could get around this behavior by doing the write manually in a DoFn that takes the to-be-written elements as a side input, e.g.
PCollectionView<List<String>> errorRecordsView = errorRecords.apply(
        View.<String>asList());

// Your "main" PCollection is a PCollection with a single element,
// so the DoFn will get invoked exactly once.
p.apply(Create.of(new String[]{"whatever"}))
        // The side input is your error records.
        .apply(ParDo.of(new DoFn<String, String>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                List<String> errors = c.sideInput(errorRecordsView);
                if (!errors.isEmpty()) {
                    // Open the file manually and write all the errors to it.
                }
            }
        }).withSideInputs(errorRecordsView));
Being able to do this with the native Beam writes is a reasonable request. It is now supported in recent releases of Beam by setting skipIfEmpty.
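If your Beam version exposes skipIfEmpty() on TextIO.Write (I am going by the name mentioned above; check your release before relying on it), the original write can stay almost unchanged and simply be skipped when the collection is empty, along these lines:
errorRecords.apply("WritingErrorRecords", TextIO.write().to(options.getBucketPath())
        .withHeader("ID|ERROR_CODE|ERROR_MESSAGE")
        .withoutSharding()
        .withSuffix(".txt")
        .skipIfEmpty());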
I'm using Apache Commons CSV to read a CSV file. The file has information about the file itself (date and time of generation) in the last line.
|XXXX |XXXXX|XXXXX|XXXX|
|XXXX |XXXXX|XXXXX|XXXX|
|File generation: 21/01/2019 17.34.00| | | |
So while parsing the file, I'm getting this as a record (obviously).
I'm wondering whether there is any way to exclude it from parsing, and whether Apache Commons CSV has any provision to handle it.
It's a while loop, so you won't know you've reached the end until you actually get there. You have two options:
Bad option: read the file once to count the number of lines, then on the second pass break the loop when you reach line (counter - 1).
Good option: your files appear to be pipe-delimited, so when you process the file line by line, simply check that line.trim().split("\\|").length > 1, or in your case only do work when the number of columns in the record is greater than 1. This ensures you don't apply your logic to lines with just one column, which happens to be your last row, aka the footer.
Example taken from Apache Commons and modified a little:
Reader in = new FileReader("path/to/file.csv");
Iterable<CSVRecord> records = CSVFormat.RFC4180.parse(in);
for (CSVRecord record : records) {
    // All lines except the footer will have more than one column.
    if (record.size() > 1) {
        // do your work here
        String columnOne = record.get(0);
        String columnTwo = record.get(1);
    }
}
Apache Commons CSV provides a way to skip the header (https://commons.apache.org/proper/commons-csv/apidocs/org/apache/commons/csv/CSVFormat.html#withSkipHeaderRecord--), but it doesn't offer a solution for ignoring the footer. You can, however, simply read all records and manually skip the last one.
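A minimal sketch of that manual approach, assuming the file fits in memory so getRecords() can load everything at once (imports as in the example above); only the footer handling is new here:
Reader in = new FileReader("path/to/file.csv");
List<CSVRecord> records = CSVFormat.RFC4180.parse(in).getRecords();

// Drop the footer (the last record) before processing.
for (CSVRecord record : records.subList(0, records.size() - 1)) {
    String columnOne = record.get(0);
    String columnTwo = record.get(1);
    // do your work here
}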
I am working on a batch application using Apache Spark. I want to write the final RDD as a text file; currently I am using the saveAsTextFile("filePath") method available on RDD.
My text file contains fields delimited with the \u0001 delimiter, so in the model class's toString() method I added all the fields separated by the \u0001 delimiter.
Is this the correct way to handle this, or is there a better approach available?
Also, what if I iterate over the RDD and write the file content using the FileWriter class available in Java?
Please advise on this.
Regards,
Shankar
To write a single file there are a few options. If you're writing to HDFS or a similar distributed store, you can first coalesce your RDD down to a single partition (note that your data must then fit on a single worker), or you can collect the data to the driver and then use a file writer.
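A minimal sketch of both options, assuming rdd is a JavaRDD<String> and the paths are placeholders; exception handling is omitted:
// Option 1: coalesce down to one partition before writing.
// The whole dataset must fit on a single worker.
rdd.coalesce(1).saveAsTextFile("hdfs:///output/single-file");

// Option 2: collect to the driver and write with plain Java IO.
// The whole dataset must fit in driver memory.
try (java.io.PrintWriter out = new java.io.PrintWriter("output.txt", "UTF-8")) {
    for (String line : rdd.collect()) {
        out.println(line);
    }
}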
public static boolean copyMerge(JavaSparkContext sc, JavaRDD<String> rdd, String dstPath,
                                String awsAccessKey, String awsSecretKey)
        throws IOException, URISyntaxException {
    Configuration hadoopConf = sc.hadoopConfiguration();
    hadoopConf.set("fs.s3.awsAccessKeyId", awsAccessKey);
    hadoopConf.set("fs.s3.awsSecretAccessKey", awsSecretKey);

    // Write the RDD to a temporary folder first; this produces one part file per partition.
    String tempFolder = "s3://bucket/folder";
    rdd.saveAsTextFile(tempFolder);

    // Merge the part files into a single destination file.
    FileSystem fs = FileSystem.get(new URI(tempFolder), hadoopConf);
    return FileUtil.copyMerge(fs, new Path(tempFolder), fs, new Path(dstPath), false, hadoopConf, null);
}
This solution works for S3 or any HDFS-compatible file system. It is achieved in two steps:
Save the RDD with saveAsTextFile; this generates multiple part files in the folder.
Run Hadoop's copyMerge to combine them into a single file.
Instead of collecting the data to the driver, I would rather suggest using coalesce, which is better for avoiding memory problems.
I am trying to use protocol buffers to record a little market data. Each time I get a quote notification from the market, I take the quote and convert it into a protocol buffers object. Then I call "writeDelimitedTo".
Example of my recorder:
try {
    writeLock.lock();
    LimitOrder serializableQuote = ...
    LimitOrderTransport gpbQuoteRaw = serializableQuote.serialize();
    LimitOrderTransport gpbQuote = LimitOrderTransport.newBuilder(gpbQuoteRaw).build();
    gpbQuote.writeDelimitedTo(fileStream);
    csvWriter1.println(gpbQuote.getIdNumber() + DELIMITER + gpbQuote.getSymbol() + ...);
} finally {
    writeLock.unlock();
}
The reason for the locking is that quotes coming from different markets are handled by different threads, so I was trying to simplify and "serialize" the logging to the file.
Code that Reads the resulting file:
FileInputStream stream = new FileInputStream(pathToFile);
PrintWriter writer = new PrintWriter("quoteStream6-compare.csv", "UTF-8");

while (LimitOrderTransport.newBuilder().mergeDelimitedFrom(stream)) {
    LimitOrderTransport gpbQuote = LimitOrderTransport.parseDelimitedFrom(stream);
    csvWriter2.println(gpbQuote.getIdNumber() + DELIMITER + gpbQuote.getSymbol() ...);
}
When I run the recorder, I get a binary file that grows in size. When I use my reader to read from the file, I also get a large number of quotes. They are all different and appear correct.
Here's the issue: many of the quotes appear to be "missing", i.e. not present when my reader reads from the file.
I tried an experiment with csvWriter1 and csvWriter2. In my writer, I write out a CSV file; then in my reader I write a second CSV file using my protobuf file as the source.
The theory is that they should match up. They don't: the original CSV file contains many more quotes than the CSV I generate by reading my recorded protobuf data.
What gives? Am I not using writeDelimitedTo/parseDelimitedFrom correctly?
Thanks!
Your problem is here:
while(LimitOrderTransport.newBuilder().mergeDelimitedFrom(stream)) {
LimitOrderTransport gpbQuote= LimitOrderTransport.parseDelimitedFrom(stream);
The first line constructs a new LimitOrderTransport.Builder and uses it to parse a message from the stream. Then that builder is discarded.
The second line parses a new message from the same stream, into a new builder.
So you are discarding every other message.
Do this instead:
while (true) {
    LimitOrderTransport gpbQuote = LimitOrderTransport.parseDelimitedFrom(stream);
    if (gpbQuote == null) break; // EOF
    // process gpbQuote as before, e.g. write the CSV line
}
The output files produced by my reduce operation are huge (1 GB after gzipping). I want to break the output into smaller files of about 200 MB each. Is there a property or Java class to split the reduce output by size or by number of lines?
I cannot increase the number of reducers, because that has a negative impact on the performance of the Hadoop job.
I'm curious as to why you cannot just use more reducers, but I will take you at your word.
One option is to use MultipleOutputs and write multiple files from one reducer. For example, say the output file for each reducer is 1 GB and you want 256 MB files instead. This means you need to write 4 files per reducer rather than one.
In your job driver, do this:
JobConf conf = ...;
// You should probably pass this in as parameter rather than hardcoding 4.
conf.setInt("outputs.per.reducer", 4);
// This sets up the infrastructure to write multiple files per reducer.
MultipleOutputs.addMultiNamedOutput(conf, "multi", YourOutputFormat.class, YourKey.class, YourValue.class);
In your reducer, do this:
#Override
public void configure(JobConf conf) {
numFiles = conf.getInt("outputs.per.reducer", 1);
multipleOutputs = new MultipleOutputs(conf);
// other init stuff
...
}
#Override
public void reduce(YourKey key
Iterator<YourValue> valuesIter,
OutputCollector<OutKey, OutVal> ignoreThis,
Reporter reporter) {
// Do your business logic just as you're doing currently.
OutKey outputKey = ...;
OutVal outputVal = ...;
// Now this is where it gets interesting. Hash the value to find
// which output file the data should be written to. Don't use the
// key since all the data will be written to one file if the number
// of reducers is a multiple of numFiles.
int fileIndex = (outputVal.hashCode() & Integer.MAX_VALUE) % numFiles;
// Now use multiple outputs to actually write the data.
// This will create output files named: multi_0-r-00000, multi_1-r-00000,
// multi_2-r-00000, multi_3-r-00000 for reducer 0. For reducer 1, the files
// will be multi_0-r-00001, multi_1-r-00001, multi_2-r-00001, multi_3-r-00001.
multipleOutputs.getCollector("multi", Integer.toString(fileIndex), reporter)
.collect(outputKey, outputValue);
}
#Overrider
public void close() {
// You must do this!!!!
multipleOutputs.close();
}
This pseudocode was written with the old MapReduce API in mind, but equivalent APIs exist in the new mapreduce package, so either way you should be all set.
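For reference, here is a rough sketch of what the same idea looks like with the new org.apache.hadoop.mapreduce API; the Text/IntWritable types and the word-count-style reduce body are stand-ins for your own key, value, and business logic:
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class SizeSplitReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private MultipleOutputs<Text, IntWritable> multipleOutputs;
    private int numFiles;

    @Override
    protected void setup(Context context) {
        numFiles = context.getConfiguration().getInt("outputs.per.reducer", 1);
        multipleOutputs = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Placeholder business logic: sum the values for the key.
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        IntWritable outputVal = new IntWritable(sum);

        // Hash the value to pick one of the per-reducer output files,
        // just like in the old-API version above.
        int fileIndex = (outputVal.hashCode() & Integer.MAX_VALUE) % numFiles;

        // "multi" must be registered in the driver with
        // MultipleOutputs.addNamedOutput(job, "multi", ...).
        multipleOutputs.write("multi", key, outputVal, "multi_" + fileIndex);
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        multipleOutputs.close();
    }
}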
There's no property to do this. You'll need to write your own output format & record writer.