In my HDFS processing, empty files with names like part-m-0000* are created after each job. Each of these files is empty, but each one is consuming 64MB of disk space because that is the default block size.
I need to make code changes to skip the creation of these files. How do I do this?
Note: I am using org.apache.hadoop.mapreduce.lib.output.MultipleOutputs<KEYOUT,VALUEOUT> to write output records, not Context, so I end up with output records in files like "successful-m-00000" anyway.
According to Hadoop: The Definitive Guide, the underlying file system does not take up a whole HDFS block when the file is empty:
Unlike a filesystem for a single disk, a file in HDFS that is smaller than a single block does not occupy a full block’s worth of underlying storage.
To suppress the output files when they are empty, use LazyOutputFormat#setOutputFormatClass. See the Apache documentation for LazyOutputFormat for details.
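A minimal driver sketch of how this might look. The job name, the Text/Text types and the named output "successful" are illustrative assumptions; TextOutputFormat stands in for whatever output format you actually use:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class LazyOutputDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "lazy-output");

        // Named output written through MultipleOutputs in the mapper/reducer
        // (the name "successful" is illustrative).
        MultipleOutputs.addNamedOutput(job, "successful",
                TextOutputFormat.class, Text.class, Text.class);

        // Wrap the real output format so the default part-m-/part-r- files are
        // only created when a record is actually written through the Context.
        LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);

        // ... set mapper/reducer classes, input/output paths, then submit ...
    }
}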
Related
I'm trying to build a very large CSV file on S3.
I want to build this file on S3
I want to append rows to this file in batches.
Number of rows could be anywhere between 10k to 1M
Size of each batch could be < 5 MB (so multipart upload is not feasible)
What would be the right way of accomplishing something like this?
Traditionally in Big Data processing ("data lakes"), information related to a single table is stored in a directory rather than in a single file. So, appending information to a table is as simple as adding another file to the directory. All files within the directory need to share the same schema (such as the CSV columns, or the JSON structure).
The directory of files can then be used with tools such as:
Spark, Hive and Presto on Hadoop
Amazon Athena
Amazon Redshift Spectrum
A benefit of this method is that the above systems can process multiple files in parallel rather than being restricted to processing a single file in a single-threaded manner.
Also common is to compress the files using technologies like gzip. This lowers storage requirements and makes it faster to read data from disk. Adding additional files is easy (just add another csv.gz file) rather than having to unzip, append and re-zip a file.
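As a rough sketch of what "appending" then looks like, assuming the AWS SDK for Java: each new batch simply becomes its own object under the table's prefix, and nothing already stored ever has to be rewritten. The bucket, prefix and file names below are illustrative:

import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class AppendBatchAsNewObject {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Each batch is uploaded as a separate gzipped CSV object under the
        // table's prefix; query engines read the whole prefix as one table.
        s3.putObject("my-data-lake-bucket",
                "sales/batch-00042.csv.gz",
                new File("/tmp/batch-00042.csv.gz"));
    }
}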
Bottom line: It would be advisable to re-think your requirements for "one great big CSV file".
'One big file' isn't going to work for you: you can't append rows to an S3 object without first downloading the entire object, adding the rows, and then uploading the new version over the old one. For small files this works, but as the file gets larger the bandwidth and processing cost grows geometrically, and it may get very slow and possibly expensive.
You are better off refactoring your design to work with lots of small files instead of one big one.
Leave a 5 MB garbage object sitting on S3 and do a multipart copy with it, where part 1 = the 5 MB garbage object and part 2 = the file you want to upload and concatenate. Keep repeating this for each fragment, and finally use a range copy to strip out the 5 MB of garbage.
I know that an HDFS block size is 64 MB. But let us say I create a new HDFS file and keep on writing data to it, writing as little as, say, 4 KB at a time. Would that be very inefficient? By the end my file could be 1 GB in size, but does writing data little by little make writing to such a file inefficient? I mean, is it important to buffer my data before writing to the file? In this case, for example, I could keep accumulating data in a buffer until it reaches a size of 64 MB, then write it to the HDFS file, and repeat that procedure after clearing the buffer.
First of all, the HDFS block size is up to you: the default is configurable, and you can set a different block size for a given file when you put it into HDFS.
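A minimal sketch of that, assuming the Hadoop FileSystem API and an illustrative path: the block size is passed per file at create time, and the client-side output stream buffers small writes into packets before shipping them to the datanodes, so frequent small write() calls are not individual round trips to the cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CustomBlockSizeWrite {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Per-file block size: 128 MB for this file, regardless of the cluster default.
        long blockSize = 128L * 1024 * 1024;
        FSDataOutputStream out = fs.create(new Path("/data/myfile"), true, 4096, (short) 3, blockSize);

        // Small writes are buffered client-side before being sent, so writing
        // 4 KB at a time does not cost one network round trip per call.
        byte[] chunk = new byte[4096];
        for (int i = 0; i < 16; i++) {
            out.write(chunk);
        }
        out.close();
    }
}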
If your data is not at hand when you want to place it to HDFS, then use Flume, set up the source to your data generator, and your sink to a file on HDFS, and let the tool do its job without struggling with the details. If the data is in a database, you can turn to Sqoop also.
Otherwise, if you are experimenting, run performance tests and check which approach is better; it depends heavily on how your data is generated and on which libraries you use and how you use them.
For testing purposes I'm trying to load a massive number of small files into HDFS. We are talking about one million (1,000,000) files ranging in size from 1 KB to 100 KB. I generated those files with an R script on a Linux system, all in one folder. Every file has an information structure containing a header with product information and a varying number of columns of numeric information.
The problem is when I try to upload those local files into HDFS with the command:
hdfs dfs -copyFromLocal /home/user/Documents/smallData /
Then I get one of the following Java heap size errors:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
I use the Cloudera CDH5 distribution with a Java heap size of about 5 GB. Is there another way than increasing this heap size even further? Maybe a better way to load this mass of data into HDFS?
I'm very thankful for every helpful comment!
Even if you increase the memory and manage to store the files in HDFS, you will then run into many problems at processing time.
Problems with small files and HDFS
A small file is one which is significantly smaller than the HDFS block size (default 64MB). If you’re storing small files, then you probably have lots of them (otherwise you wouldn’t turn to Hadoop), and the problem is that HDFS can’t handle lots of files.
Every file, directory and block in HDFS is represented as an object in the namenode’s memory, each of which occupies 150 bytes, as a rule of thumb. So 10 million files, each using a block, would use about 3 gigabytes of memory. Scaling up much beyond this level is a problem with current hardware. Certainly a billion files is not feasible.
Furthermore, HDFS is not geared up to efficiently accessing small files: it is primarily designed for streaming access of large files. Reading through small files normally causes lots of seeks and lots of hopping from datanode to datanode to retrieve each small file, all of which is an inefficient data access pattern.
Problems with small files and MapReduce
Map tasks usually process a block of input at a time (using the default FileInputFormat). If the file is very small and there are a lot of them, then each map task processes very little input, and there are a lot more map tasks, each of which imposes extra bookkeeping overhead. Compare a 1GB file broken into 16 64MB blocks, and 10,000 or so 100KB files. The 10,000 files use one map each, and the job time can be tens or hundreds of times slower than the equivalent one with a single input file.
There are a couple of features to help alleviate the bookkeeping overhead: task JVM reuse for running multiple map tasks in one JVM, thereby avoiding some JVM startup overhead (see the mapred.job.reuse.jvm.num.tasks property), and MultiFileInputSplit which can run more than one split per map.
Solution
Hadoop Archives (HAR files)
Create .HAR File
Hadoop Archives (HAR files) were introduced to HDFS in 0.18.0 to alleviate the problem of lots of files putting pressure on the namenode’s memory. HAR files work by building a layered filesystem on top of HDFS. A HAR file is created using the hadoop archive command, which runs a MapReduce job to pack the files being archived into a small number of HDFS files:
hadoop archive -archiveName name -p <parent> <src>* <dest>
hadoop archive -archiveName foo.har -p /user/hadoop dir1 dir2 /user/zoo
Sequence Files
The usual response to questions about “the small files problem” is: use a SequenceFile. The idea here is that you use the filename as the key and the file contents as the value. This works very well in practice. Going back to the 10,000 100KB files, you can write a program to put them into a single SequenceFile, and then you can process them in a streaming fashion (directly or using MapReduce) operating on the SequenceFile. There are a couple of bonuses too. SequenceFiles are splittable, so MapReduce can break them into chunks and operate on each chunk independently. They support compression as well, unlike HARs. Block compression is the best option in most cases, since it compresses blocks of several records (rather than per record).
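A rough sketch of such a packing program, assuming filename-as-Text key and raw bytes as BytesWritable value; the input folder and output path are illustrative:

import java.io.File;
import java.nio.file.Files;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SmallFilesToSequenceFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path out = new Path("/user/hadoop/smallData.seq");   // illustrative output path

        SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(out),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class));
        try {
            for (File f : new File("/home/user/Documents/smallData").listFiles()) {
                byte[] content = Files.readAllBytes(f.toPath());
                // filename as key, raw file bytes as value
                writer.append(new Text(f.getName()), new BytesWritable(content));
            }
        } finally {
            writer.close();
        }
    }
}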
HBase
If you are producing lots of small files, then, depending on the access pattern, a different type of storage might be more appropriate. HBase stores data in MapFiles (indexed SequenceFiles), and is a good choice if you need to do MapReduce style streaming analyses with the occasional random look up. If latency is an issue, then there are lots of other choices.
First of all: if this isn't a stress test on your namenode, it is ill-advised to do this. But I assume you know what you are doing. (Expect slow progress on this.)
If the objective is to just get the files on HDFS, try doing this in smaller batches or set a higher heap size on your hadoop client.
You do this, as rpc1 mentioned in his answer, by prefixing HADOOP_HEAPSIZE=<mem in MB here> to your hadoop fs -put command.
Try to increase HEAPSIZE
HADOOP_HEAPSIZE=2048 hdfs dfs -copyFromLocal /home/user/Documents/smallData
The Hadoop Distributed File System is not good with many small files, but it is good with fewer, bigger files. HDFS keeps a record in a lookup table that points to every file/block in HDFS, and this lookup table is usually loaded into memory. So you should not just increase the Java heap size of the client, but also increase the heap size of the namenode inside hadoop-env.sh; these are the defaults:
export HADOOP_HEAPSIZE=1000
export HADOOP_NAMENODE_INIT_HEAPSIZE="1000"
If you are going to do processing on those files, you should expect low performance on the first MapReduce job you run on them (Hadoop creates as many map tasks as there are files/blocks, and this will overload your system, unless you use CombineFileInputFormat). I advise you to either merge the files into bigger files (64 MB / 128 MB) or use another data source (not HDFS).
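If you do go the MapReduce route, here is a hedged sketch of the CombineFileInputFormat approach using CombineTextInputFormat from the newer API (available in recent Hadoop 2.x releases); the split size and paths are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

public class CombineSmallFilesDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "combine-small-files");

        // One split (and therefore one map task) now covers many small files,
        // instead of one map task per file.
        job.setInputFormatClass(CombineTextInputFormat.class);
        CombineTextInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024); // ~128 MB per split

        CombineTextInputFormat.addInputPath(job, new Path("/smallData"));
        // ... set mapper/reducer, output path, then job.waitForCompletion(true) ...
    }
}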
To solve this problem, I build a single file in a custom format whose content is all of the small files. The format looks like this:
<DOC>
<DOCID>1</DOCID>
<DOCNAME>Filename</DOCNAME>
<DOCCONTENT>
Content of file 1
</DOCCONTENT>
</DOC>
This structure can have more or fewer fields, but the idea is the same. For example, I have used this structure:
<DOC>
<DOCID>1</DOCID>
Content of file 1
</DOC>
I have handled more than six million files this way.
If you want each file to be processed by a single map task, you can delete the \n characters between the <DOC> and </DOC> tags. After that, you only have to parse the structure to get the document identifier and the content.
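A rough sketch of a packer along these lines, assuming local input and output paths (both illustrative) and the simplified one-line-per-record variant:

import java.io.File;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class DocPacker {
    public static void main(String[] args) throws Exception {
        File[] inputs = new File("/home/user/Documents/smallData").listFiles();

        try (PrintWriter out = new PrintWriter("/home/user/Documents/packed.txt", "UTF-8")) {
            int id = 1;
            for (File f : inputs) {
                String content = new String(Files.readAllBytes(f.toPath()), StandardCharsets.UTF_8);
                // One record per line: newlines inside the document are removed so a
                // line-oriented input format sees each <DOC>...</DOC> as a single record.
                out.println("<DOC><DOCID>" + id++ + "</DOCID>"
                        + content.replace("\r", " ").replace("\n", " ") + "</DOC>");
            }
        }
    }
}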
I need to monitor a certain folder for new files, which I need to process.
I have the following requirements:
The filenames of the files are sequence numbers. I need to process each file in order (lowest number first; there is no guarantee that every sequence number exists, e.g. 1, 2, 5, 8, 9).
If files already exist in the folder during startup, I need to process them directly
I need a guarantee that I only process each file once
I need to avoid reading incomplete files (which are still being copied)
The service should of course be reliable...
What is the most common way to accomplish this?
I'm using Java SE7 and Spring 4.
I already had a look at the WatchService of Java 7, but it seems to have problems with processing files that already exist during startup, and with avoiding incomplete files.
Assembling comments into an answer.
The easiest way to process the files in the correct order is to load the entire directory file listing into an array/list and then sort the list using an appropriate comparator, e.g. load the files with File.list() or File.listFiles().
This is not the most efficient methodology, but for less than 10,000 files should be adequate unless you need faster startup time performance (I can imagine a small lag before processing begins as all of the files are listed).
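A minimal sketch of that listing-and-sorting step, assuming the sequence number can be extracted from the filename; the directory path is illustrative:

import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

public class SequenceOrderListing {
    public static void main(String[] args) {
        File[] files = new File("/var/incoming").listFiles();   // monitored folder (illustrative)

        // Sort by the numeric sequence number in the filename, lowest first;
        // gaps in the sequence (1, 2, 5, 8, 9, ...) do not affect the ordering.
        Arrays.sort(files, new Comparator<File>() {
            @Override
            public int compare(File a, File b) {
                return Long.compare(sequenceNumber(a), sequenceNumber(b));
            }
        });

        for (File f : files) {
            // hand each file to the processing step in order ...
        }
    }

    private static long sequenceNumber(File f) {
        // assumes the filename is (or contains) the sequence number
        return Long.parseLong(f.getName().replaceAll("\\D", ""));
    }
}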
To avoid reading incomplete files, you should acquire an exclusive FileLock on the file (via a FileChannel, which you can get from the FileOutputStream or FileInputStream; note that you may not be able to get an exclusive lock from a FileInputStream). Assuming the OS being used supports file locking (which modern OSes do) and the application writing the file is well behaved and holds a lock (hopefully it is), then as soon as you are able to acquire the lock you know the file is complete.
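A hedged sketch of that check, assuming the writer really does hold a lock while it copies the file:

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class CompleteFileCheck {
    // Returns true only if an exclusive lock could be taken, i.e. (assuming the
    // writer holds a lock while copying) the file is no longer being written.
    static boolean isComplete(File file) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(file, "rw");
        try {
            FileChannel channel = raf.getChannel();
            FileLock lock = channel.tryLock();
            if (lock == null) {
                return false;          // writer still holds the lock - retry later
            }
            lock.release();
            return true;
        } finally {
            raf.close();
        }
    }
}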
If for some reason you cannot rely on file locking then you either need to have the writing program first write to a temporary file (perhaps with a different extension) and then atomically move / rename the file (atomic for most OSes if on the same file system / partition), or monitor the file for a period of time to see if further bytes are being written (not the most robust methodology).
Is there any way to access the number of blocks allocated to a file with the standard Java File API? Or even to do it with some unsupported and undocumented API underneath? Anything to avoid native code plugins.
I'm talking about the st_blocks field of struct stat that the fstat/stat syscalls work on in Unix.
What I want to do is create a sparse copy of a file that now contains lots of redundant data, i.e. make a new copy of it containing only the active data, written sparsely, and then swap the two files with an atomic rename/link operation. But I need a way to find out how many blocks are allocated to the file beforehand; it might already have been sparsely copied. The old file is then removed.
This will be used to free up disk space in a database application that is 100% Java. The benefit of relying on sparse-file support in the filesystem is that I would not have to change the index that points out where the data is located, which would increase the complexity of the task at hand.
I think I can do reasonably well by relying on the file timestamp to see whether files have already been cleaned up. But this intrigued me: I cannot even find anything in the Java 7 NIO.2 API for file attribute access at this level.
The only way I can think of is to use ls -s filename to get the actual size of the file on disk. http://www.lrdev.com/lr/unix/sparsefile.html
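A minimal sketch of wrapping that from Java, with the caveat that the unit of the block count printed by ls -s depends on the ls implementation and environment (often 1024-byte units for GNU ls), so treat the value as a measure of allocated space rather than as raw st_blocks:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class AllocatedBlocks {
    // Shells out to "ls -s" and parses the leading block count.
    static long allocatedBlocks(String path) throws Exception {
        Process p = new ProcessBuilder("ls", "-s", path).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();            // e.g. "8 /path/to/file"
            p.waitFor();
            return Long.parseLong(line.trim().split("\\s+")[0]);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(allocatedBlocks(args[0]));
    }
}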