Write 1 million rows of CSV into S3 in batches - Java

I'm trying to build a very large CSV file on S3.
I want to build this file directly on S3 and append rows to it in batches.
The number of rows could be anywhere between 10k and 1M.
The size of each batch could be less than 5 MB (so a standard multipart upload is not feasible).
What would be the right way of accomplishing something like this?

Traditionally in Big Data processing ("Data Lakes"), information related to a single table is stored in a directory rather than in a single file. So, appending information to a table is as simple as adding another file to the directory. All files within the directory need to have the same schema (such as CSV columns, or JSON data).
The directory of files can then be used with tools such as:
* Spark, Hive and Presto on Hadoop
* Amazon Athena
* Amazon Redshift Spectrum
A benefit of this method is that the above systems can process multiple files in parallel rather than being restricted to processing a single file in a single-threaded manner.
It is also common to compress the files with technologies like gzip. This lowers storage requirements and makes it faster to read data from disk. Adding more data is easy (just add another csv.gz file), rather than having to unzip, append and re-zip a single file.
Bottom line: It would be advisable to re-think your requirements for "one great big CSV file".
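A minimal sketch of that pattern, assuming the AWS SDK for Java v1 (the bucket name, key prefix, and class name are placeholders, not anything from the question): each batch of rows is written as its own gzipped CSV object under a shared prefix, so "appending" to the table never touches existing objects.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.zip.GZIPOutputStream;

public class BatchCsvUploader {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    private int batchNumber = 0;

    // Writes one batch of rows as its own object, e.g. s3://my-bucket/my-table/part-00042.csv.gz
    public void uploadBatch(List<String> csvRows) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            for (String row : csvRows) {
                gzip.write((row + "\n").getBytes(StandardCharsets.UTF_8));
            }
        }
        byte[] body = buffer.toByteArray();

        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentLength(body.length);
        String key = String.format("my-table/part-%05d.csv.gz", batchNumber++);
        s3.putObject("my-bucket", key, new ByteArrayInputStream(body), meta);
    }
}

Tools such as Athena or Spark can then be pointed at s3://my-bucket/my-table/ and will read all the part files as a single table.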

'One big file' isn't going to work for you: you can't append rows to an S3 object without first downloading the entire object, adding the rows, and then uploading the new version over the old one. That works for small files, but as the file grows, the bandwidth and processing for each append grow with it, so the total work grows roughly quadratically with the number of batches, and it may get very slow and possibly expensive.
You are better off refactoring your design to work with lots of little files instead of one big one.

Leave a 5 MB garbage object sitting on S3 and do a multipart 'concatenation' with it, where part 1 = the existing object (which starts out as the 5 MB garbage object), copied server-side, and part 2 = the fragment you want to upload and concatenate. Keep repeating this for each fragment, and finally use a ranged copy to strip out the leading 5 MB of garbage.
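A hedged sketch of that trick, assuming the AWS SDK for Java v1 (bucket, key, and class names are placeholders): each "append" rewrites the object through a multipart upload whose first part is a server-side copy of the current object, which only satisfies the 5 MB part minimum because of the garbage pad at the front.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;

public class S3Appender {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Rewrites bucket/key as (existing object + batch) using a multipart upload.
    public void append(String bucket, String key, byte[] batch) {
        InitiateMultipartUploadResult init = s3.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucket, key));
        List<PartETag> etags = new ArrayList<>();

        // Part 1: server-side copy of the current object. It must be >= 5 MB,
        // which is why the 5 MB "garbage" pad stays at the front of the file.
        CopyPartResult copy = s3.copyPart(new CopyPartRequest()
                .withSourceBucketName(bucket).withSourceKey(key)
                .withDestinationBucketName(bucket).withDestinationKey(key)
                .withUploadId(init.getUploadId())
                .withPartNumber(1));
        etags.add(new PartETag(1, copy.getETag()));

        // Part 2: the new batch; the last part of a multipart upload may be < 5 MB.
        UploadPartResult upload = s3.uploadPart(new UploadPartRequest()
                .withBucketName(bucket).withKey(key)
                .withUploadId(init.getUploadId())
                .withPartNumber(2)
                .withInputStream(new ByteArrayInputStream(batch))
                .withPartSize(batch.length));
        etags.add(upload.getPartETag());

        s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
                bucket, key, init.getUploadId(), etags));
    }
}

Once all fragments are in, a final multipart copy that uses CopyPartRequest.withFirstByte/withLastByte to skip the first 5 MB yields a clean object without the pad.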

Related

Java AWS SDK - How to upload multiple parts of one big file into single s3 bucket file without joining it first?

When processing big files, for various reasons I'm already splitting them into 10 MB chunks. Is there a way to upload all the parts to S3 so that the original file is assembled there, or is the only way to first join the parts and then use MultipartUpload?
It just doesn't make sense to join the parts and then use MultipartUpload, which breaks the file into pieces again anyway; also, for very big files (>5GB) the cat command takes a while to execute and sometimes times out...

Best way of storing multiple files of various sizes

My server application stores files of different sizes. The database grows to 2-4 TB, where about 90% of all files are smaller than 100 KB and the rest range up to 2 GB.
Which would be the best way of storing these files, given the following conditions:
* The database should be portable.
It is in the current solution, since the index, metadata, etc. are stored in a SQLite3 database. But it is a pest to copy 10 million individual small files, so I'd prefer to store the small files inside archives, e.g. 1000 files of 2 GB each.
* Some files are quite big, so storing them as BLOBs may become a performance problem.
Additionally, some of the bigger files (5 MB - 2 GB) may need random-access reading, which won't be possible with most archive formats.
* Some transactions need to read thousands of small files at once
...which works well with BLOBs, but I'm not sure whether something like a zip archive would perform well in this case. The server can in most cases put all of these files in one archive, but the objects may have to be read in random order.
* Files stored should not be (directly) executable, but there is no need for encryption.
Which system would you recommend for this?

How efficient is writing to an HDFS file in several steps?

I know that the HDFS block size is 64 MB. But let's say I create a new HDFS file and keep writing data to it, writing as little as, say, 4 KB at a time. Would that be very inefficient? By the end my file could be 1 GB in size, but does writing data little by little make writing to such a file inefficient? I mean, is it important to buffer my data before writing to the file? In this case, for example, I could keep accumulating data into a buffer until it reaches a size of 64 MB, then write it to the HDFS file, and repeat that procedure after clearing the buffer.
First of all, the HDFS block size is up to you: the default is configurable, and you can set a different block size for a given file when you put it into HDFS.
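For example, a minimal sketch with the Hadoop FileSystem API (the path, replication factor, and block size below are arbitrary placeholders), which also touches on the buffering question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.nio.charset.StandardCharsets;

public class HdfsBlockSizeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // create(path, overwrite, bufferSize, replication, blockSize):
        // a 128 MB block size for this file only, regardless of the cluster default.
        try (FSDataOutputStream out = fs.create(
                new Path("/data/output.txt"), true, 4096, (short) 3, 128L * 1024 * 1024)) {
            for (int i = 0; i < 1000; i++) {
                // Small writes are buffered by the client-side stream and shipped to
                // the datanodes in larger packets, so writing a few KB at a time is fine.
                out.write(("record " + i + "\n").getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}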
If your data is not at hand when you want to place it in HDFS, then use Flume: set up the source to point at your data generator and the sink to a file on HDFS, and let the tool do its job without struggling with the details. If the data is in a database, you can turn to Sqoop as well.
Otherwise, if you are experimenting, run performance tests and check which approach is better; it depends heavily on how your data is generated and on which library you use, and how.

HDFS - loading a massive number of files

For testing purposes I'm trying to load a massive number of small files into HDFS. We are actually talking about 1 million (1,000,000) files ranging from 1 KB to 100 KB in size. I generated those files with an R script on a Linux system, all in one folder. Every file has an information structure consisting of a header with product information and a varying number of columns with numeric information.
The problem is when I try to upload those local files into HDFS with the command:
hdfs dfs -copyFromLocal /home/user/Documents/smallData /
Then I get one of the following Java heap size errors:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
I use the Cloudera CDH5 distribution with a Java heap size of about 5 GB. Is there another way than increasing this Java heap size even more? Maybe a better way to load this mass of data into HDFS?
I'm very thankful for every helpful comment!
Even if you increase the memory and manage to store the files in HDFS, you will run into many problems at processing time.
Problems with small files and HDFS
A small file is one which is significantly smaller than the HDFS block size (default 64MB). If you’re storing small files, then you probably have lots of them (otherwise you wouldn’t turn to Hadoop), and the problem is that HDFS can’t handle lots of files.
Every file, directory and block in HDFS is represented as an object in the namenode’s memory, each of which occupies 150 bytes, as a rule of thumb. So 10 million files, each using a block, would use about 3 gigabytes of memory. Scaling up much beyond this level is a problem with current hardware. Certainly a billion files is not feasible.
Furthermore, HDFS is not geared up to efficiently accessing small files: it is primarily designed for streaming access of large files. Reading through small files normally causes lots of seeks and lots of hopping from datanode to datanode to retrieve each small file, all of which is an inefficient data access pattern.
Problems with small files and MapReduce
Map tasks usually process a block of input at a time (using the default FileInputFormat). If the file is very small and there are a lot of them, then each map task processes very little input, and there are a lot more map tasks, each of which imposes extra bookkeeping overhead. Compare a 1GB file broken into 16 64MB blocks, and 10,000 or so 100KB files. The 10,000 files use one map each, and the job time can be tens or hundreds of times slower than the equivalent one with a single input file.
There are a couple of features to help alleviate the bookkeeping overhead: task JVM reuse for running multiple map tasks in one JVM, thereby avoiding some JVM startup overhead (see the mapred.job.reuse.jvm.num.tasks property), and MultiFileInputSplit which can run more than one split per map.
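For illustration, a hedged snippet (old mapred API; the helper class name is made up) showing how unlimited task JVM reuse could be enabled for a job via the property mentioned above:

import org.apache.hadoop.mapred.JobConf;

public class JvmReuseConfig {
    public static JobConf withJvmReuse(JobConf conf) {
        // -1 means "reuse the task JVM for an unlimited number of tasks of the same job"
        conf.set("mapred.job.reuse.jvm.num.tasks", "-1");
        return conf;
    }
}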
Solution
Hadoop Archives (HAR files)
Create .HAR File
Hadoop Archives (HAR files) were introduced to HDFS in 0.18.0 to alleviate the problem of lots of files putting pressure on the namenode’s memory. HAR files work by building a layered filesystem on top of HDFS. A HAR file is created using the hadoop archive command, which runs a MapReduce job to pack the files being archived into a small number of HDFS files.
hadoop archive -archiveName name -p <parent> <src>* <dest>
hadoop archive -archiveName foo.har -p /user/hadoop dir1 dir2 /user/zoo
Sequence Files
The usual response to questions about “the small files problem” is: use a SequenceFile. The idea here is that you use the filename as the key and the file contents as the value. This works very well in practice. Going back to the 10,000 100KB files, you can write a program to put them into a single SequenceFile, and then you can process them in a streaming fashion (directly or using MapReduce) operating on the SequenceFile. There are a couple of bonuses too. SequenceFiles are splittable, so MapReduce can break them into chunks and operate on each chunk independently. They support compression as well, unlike HARs. Block compression is the best option in most cases, since it compresses blocks of several records (rather than per record).
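A minimal sketch of that approach (the input and output paths are placeholders): pack a directory of small files into one block-compressed SequenceFile, keyed by filename.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SmallFilesToSequenceFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path input = new Path("/user/hadoop/smallData");
        Path output = new Path("/user/hadoop/smallData.seq");

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(output),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class),
                SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK))) {

            for (FileStatus status : fs.listStatus(input)) {
                // Filename becomes the key, the raw file bytes become the value.
                byte[] content = new byte[(int) status.getLen()];
                try (FSDataInputStream in = fs.open(status.getPath())) {
                    in.readFully(content);
                }
                writer.append(new Text(status.getPath().getName()),
                              new BytesWritable(content));
            }
        }
    }
}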
HBase
If you are producing lots of small files, then, depending on the access pattern, a different type of storage might be more appropriate. HBase stores data in MapFiles (indexed SequenceFiles), and is a good choice if you need to do MapReduce style streaming analyses with the occasional random look up. If latency is an issue, then there are lots of other choices.
First of all: if this isn't a stress test on your namenode, it's ill-advised to do this. But I assume you know what you are doing (and expect slow progress).
If the objective is just to get the files onto HDFS, try doing this in smaller batches or set a higher heap size on your hadoop client.
You do this, as rpc1 mentioned in his answer, by prefixing HADOOP_HEAPSIZE=<mem in MB here> to your hadoop fs -put command.
Try to increase HEAPSIZE
HADOOP_HEAPSIZE=2048 hdfs dfs -copyFromLocal /home/user/Documents/smallData
The Hadoop Distributed File System is not good with many small files, but it is good with many big files. HDFS keeps a record in a lookup table that points to every file/block in HDFS, and this lookup table is usually loaded in memory. So you should not just increase the Java heap size of the client, but also increase the heap size of the namenode inside hadoop-env.sh; these are the defaults:
export HADOOP_HEAPSIZE=1000
export HADOOP_NAMENODE_INIT_HEAPSIZE="1000"
If you are going to do processing on those files, you should expect low performance on the first MapReduce job you run on them (Hadoop creates as many map tasks as there are files/blocks, and this will overload your system unless you use CombineFileInputFormat). I advise you to either merge the files into big files (64 MB / 128 MB) or use another data source (not HDFS).
To solve this problem, I build a single file with a custom format. The contents of the file are all the small files. The format looks like this:
<DOC>
<DOCID>1</DOCID>
<DOCNAME>Filename</DOCNAME>
<DOCCONTENT>
Content of file 1
</DOCCONTENT>
</DOC>
This structure could have more or fewer fields, but the idea is the same. For example, I have used this structure:
<DOC>
<DOCID>1</DOCID>
Content of file 1
</DOC>
And it handles more than six million files.
If you want to process each file in a single map task, you could delete the \n characters between the <DOC> and </DOC> tags. After that, you only parse the structure and get the document identifier and content.
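For illustration, a hedged sketch of building such a wrapper file in plain Java (the directory and output names are placeholders; the tag layout follows the example above):

import java.io.BufferedWriter;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DocPacker {
    public static void main(String[] args) throws IOException {
        File[] smallFiles = new File("smallData").listFiles();
        int docId = 1;
        try (BufferedWriter out = Files.newBufferedWriter(Paths.get("packed.txt"))) {
            for (File f : smallFiles) {
                String content = new String(Files.readAllBytes(f.toPath()));
                // One <DOC> record per source file; strip the newlines inside
                // 'content' here if each record must fit on a single line.
                out.write("<DOC>\n");
                out.write("<DOCID>" + docId++ + "</DOCID>\n");
                out.write("<DOCNAME>" + f.getName() + "</DOCNAME>\n");
                out.write(content + "\n");
                out.write("</DOC>\n");
            }
        }
    }
}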

Fastest compression for serializable files in Java

I have a bunch of files (around 4000), each weighing 1-5K more or less,
all created using the serialization mechanism of Java.
I'd like to compress them and send them over a network as a single file.
(They total around 200-300 MB.)
I'm looking for a way to increase the compression / decompression speed, without hurting the file size too much (as it should still be sent over the network and get stored in the server).
I'm currently using the zip package that comes with Apache Ant.
I read that zip files store meta data for each file, so I guess zip files won't be the best choice here.
So what's preferable?
Gzip / Tar?
Or not compressing at all?
Which java library would you recommend for this case?
Thanks in advance.
Not compressing at all would be fastest, but the resulting file size is the downside.
One reason why tar.gz produces smaller file sizes than zip alone is that gzip gets to work with a bigger buffer of data (the whole tar file), while in your case, zip only gets to work with the data from one file at a time (usually a lot less than the size of the tar file, if there are a lot of files).
So gzip gets to compress an entire book with chapters of pages at a time, while zip compresses each chapter of a book and then wraps the compressed chapters up in a book; i.e. a compressed collection of objects is usually smaller than a collection of compressed objects.
To produce a similar result to tar.gz, you can zip up the files in a first pass using the 'store' algorithm, and then zip up the resulting zip file using the default deflate algorithm.
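A minimal sketch of that two-pass idea with java.util.zip (file and directory names are placeholders): the inner archive uses STORED entries, and the outer pass deflates the whole inner archive as a single stream, much like gzipping a tar.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.CRC32;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class TwoPassZip {
    public static void main(String[] args) throws IOException {
        File[] inputs = new File("serialized-objects").listFiles();

        // Pass 1: pack everything uncompressed so the second pass can compress
        // across file boundaries.
        try (ZipOutputStream inner = new ZipOutputStream(
                new FileOutputStream("bundle-stored.zip"))) {
            for (File f : inputs) {
                byte[] data = Files.readAllBytes(f.toPath());
                CRC32 crc = new CRC32();
                crc.update(data);
                ZipEntry entry = new ZipEntry(f.getName());
                entry.setMethod(ZipEntry.STORED);   // STORED entries need size + CRC up front
                entry.setSize(data.length);
                entry.setCrc(crc.getValue());
                inner.putNextEntry(entry);
                inner.write(data);
                inner.closeEntry();
            }
        }

        // Pass 2: deflate the stored archive as one big entry.
        try (ZipOutputStream outer = new ZipOutputStream(
                new FileOutputStream("bundle.zip"))) {
            outer.putNextEntry(new ZipEntry("bundle-stored.zip"));
            Files.copy(Paths.get("bundle-stored.zip"), outer);
            outer.closeEntry();
        }
    }
}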
A lot depends on the network that you are using.
If it's over the internet, you might be better off sending, say, 50 zipped-up files rather than one file. If you transfer the data as one file and the copy fails, you will have to send it all again.
Copying as separate files allows you to transfer some of them in parallel and minimises the risk of a large upload failing.
Another possibility might be switching to another Serialization mechanism. JBoss Serialization is API and functionality compatible, but produces 30% less data.
