Reading a file from tar.gz archive in Spark - java

I have a bunch of tar.gz files which I would like to process with Spark without decompressing them.
A single archive is about 700 MB and contains 10 different files, but I'm interested in only one of them (which is ~7 GB after decompression).
I know that context.textFile supports tar.gz, but I'm not sure it is the right tool when an archive contains more than one file. What happens is that Spark returns the content of all files in the archive (line by line), including the file names with some binary data.
Is there any way to select which file from the tar.gz I would like to map?

AFAIK, I'd suggest the sc.binaryFiles method; please see the doc excerpt below. Since both the file name and the file content are present, you can map over the archives, pick out the file you want, and process it.
public RDD<scala.Tuple2<String,PortableDataStream>> binaryFiles(String path,
int minPartitions)
Get an RDD for a Hadoop-readable dataset as PortableDataStream for each file (useful for binary data)
For example, if you have the following files:
hdfs://a-hdfs-path/part-00000
hdfs://a-hdfs-path/part-00001
...
hdfs://a-hdfs-path/part-nnnnn
Do val rdd = sparkContext.binaryFiles("hdfs://a-hdfs-path"),
then rdd contains
(a-hdfs-path/part-00000, its content)
(a-hdfs-path/part-00001, its content)
...
(a-hdfs-path/part-nnnnn, its content)
Also, check this
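Building on that, here is a minimal sketch (not part of the original answer) of how one might combine sc.binaryFiles with Apache Commons Compress to read only one entry from each tar.gz. The input path, the entry name interesting-file.txt, and the line counting are illustrative assumptions; the key idea is to skip every tar entry except the one you care about and to stream it rather than buffer the ~7 GB in memory.

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.input.PortableDataStream;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class TarGzSelectOneFile {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("tar-gz-select"));

        // One (path, stream) pair per archive; nothing is decompressed yet.
        JavaPairRDD<String, PortableDataStream> archives =
                sc.binaryFiles("hdfs:///data/archives/*.tar.gz"); // assumed input path

        // For each archive, walk the tar entries and process only the one we want.
        JavaRDD<Long> lineCounts = archives.map(pair -> {
            long lines = 0;
            try (TarArchiveInputStream tar = new TarArchiveInputStream(
                    new GzipCompressorInputStream(pair._2().open()))) {
                TarArchiveEntry entry;
                while ((entry = tar.getNextTarEntry()) != null) {
                    if (!entry.getName().endsWith("interesting-file.txt")) { // assumed entry name
                        continue; // skip the other files in the archive
                    }
                    // Stream the large entry line by line instead of buffering it in memory.
                    BufferedReader reader = new BufferedReader(
                            new InputStreamReader(tar, StandardCharsets.UTF_8));
                    while (reader.readLine() != null) {
                        lines++; // replace with real per-line processing
                    }
                    break;
                }
            }
            return lines;
        });

        System.out.println("Total lines: " + lineCounts.reduce(Long::sum));
        sc.close();
    }
}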

Related

create zip file without writing to disk

I am working on a Spring Boot application that has to return a zip file to the frontend when the user downloads a report. I want to create the zip file without writing the zip or the original files to disk.
The directory I want to zip contains other directories that hold the actual files. For example, dir1 has subDir1 and subDir2 inside; subDir1 has two files, subDir1File1.pdf and subDir1File2.pdf, and subDir2 also has files inside.
I can do this easily by creating the physical files on the disk. However, I feel it will be more elegant to return these files without writing to disk.
You would use ByteArrayOutputStream if the goal is to write to memory. In essence, the zip file would be entirely contained in memory, so be sure that you don't risk having too many requests at once and that the file size stays reasonable! Otherwise this approach can seriously backfire!
You can use the following snippet:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;
import org.apache.commons.lang3.StringUtils; // assumed: Apache Commons Lang

// Compresses a string entirely in memory. Note that GZIPOutputStream produces
// a gzip stream, not a multi-entry .zip archive.
public static byte[] zip(final String str) throws IOException {
    if (StringUtils.isEmpty(str)) {
        throw new IllegalArgumentException("Cannot zip null or empty string");
    }
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gos = new GZIPOutputStream(bos)) {
        gos.write(str.getBytes(StandardCharsets.UTF_8));
    }
    return bos.toByteArray();
}
But as stated in another answer, make sure you are not putting your program at risk by loading everything into Java heap memory.
Please note that you should stream whenever possible. In your case, you could write your data to a java.util.zip.ZipOutputStream (https://docs.oracle.com/javase/8/docs/api/index.html?java/util/zip/ZipOutputStream.html).
The only downside of this approach is that the client won't be able to show a download progress bar, because the server cannot send the "Content-Length" header. The size of a ZIP file can only be known after it has been generated, but the server needs to send the headers first. So: no temporary zip file, no file size beforehand.
You are also talking about subdirectories. This is just a naming issue when dealing with a ZIP stream. Each zip item needs to be named like this: "directory/directory2/file.txt". This will produce subdirectories when unzipping.
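Here is a minimal sketch of that streaming approach, assuming a Spring MVC controller, in-memory report data, and the hypothetical names shown below. It writes the zip straight to the HTTP response via ZipOutputStream, and entry names like "subDir1/subDir1File1.pdf" produce subdirectories when unzipping.

// Sketch only: controller, endpoint and renderReport() are assumed names.
// On Spring Boot 2.x, import javax.servlet.http.HttpServletResponse instead.
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.io.IOException;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

@RestController
public class ReportZipController {

    @GetMapping("/report.zip")
    public void downloadReport(HttpServletResponse response) throws IOException {
        // Hypothetical in-memory content keyed by entry path.
        Map<String, byte[]> files = Map.of(
                "subDir1/subDir1File1.pdf", renderReport("report 1"),
                "subDir2/subDir2File1.pdf", renderReport("report 2"));

        response.setContentType("application/zip");
        response.setHeader("Content-Disposition", "attachment; filename=\"report.zip\"");
        // No Content-Length header: the size is unknown until the zip is finished.

        try (ZipOutputStream zos = new ZipOutputStream(response.getOutputStream())) {
            for (Map.Entry<String, byte[]> file : files.entrySet()) {
                zos.putNextEntry(new ZipEntry(file.getKey())); // "dir/file" => subdirectory
                zos.write(file.getValue());
                zos.closeEntry();
            }
        }
    }

    private byte[] renderReport(String content) {
        // Placeholder for the real report rendering (e.g. a PDF generator).
        return content.getBytes();
    }
}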

Programmatically Extract Single Specific File From 7zip Archive - Java - Linux

I would really appreciate your input on the below scenario please.
The requirements:
- I have a 7zip archive file with several thousands of files in it
- I have a Java application running on Linux that is required to retrieve individual files from the 7zip archive
I would like to retrieve a file from the archive by its path (e.g. my7zFile.7z/file1.pdf) without having to iterate through all the files in the archive and comparing file names.
I would like to avoid having to extract all files from the archive before running the search (the uncompressed archive is several TB).
I had a look at 7-Zip-JBinding, specifically the IInArchive class; the only extract method seems to work via file index, not via file name:
http://sevenzipjbind.sourceforge.net/javadoc/net/sf/sevenzipjbinding/IInArchive.html
Do you know of any other libraries that could help me with this use case or am I overlooking a way of doing this with 7zip jbinding?
Thank you
Kind regards,
Tobi
Sadly it appears the API doesn't provide enough to fulfill all your requirements. In order to extract a single file it appears you need to walk the archive index. The simplified interface to the archive makes this much easier:
The ISimpleInArchive interface provides:
ISimpleInArchiveItem[] getArchiveItems()
allowing you to retrieve a list of items in the archive.
The ISimpleInArchiveItem interface provides the method:
java.lang.String getPath()
Hence you can walk the archiveItems comparing on path. Granted this is against your requirements.
However, note this walks the index table and does not extract the files until requested. Once you have the item you're after, you can use:
ExtractOperationResult extractSlow(ISequentialOutStream SequentialOutStream)
on the item you have found to actually extract it.
Looking at the 7z file format (note this is not the official site of 7zip), the header information is all at the end of the file with the Signature header at the start of the file giving an offset to the start of the header info. So provided the SevenZip bindings are written nicely, your search will at most read the start of the file (SignatureHeader) to find the offset to the HeaderInfo section, then walk the HeaderInfo section in order to build up the file list required in getArchiveItems(). Only once you have the item you need will it shift back to the index of the actual stream for the file you want extracted (most likely when you call extractSlow).
So whilst not all your requirements are met, the overhead of the search/compare required is limited to only searching the header info of the archive.
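A hedged sketch of that walk-and-extract approach with 7-Zip-JBinding is below; the archive path, entry name and output file are illustrative assumptions. Walking getArchiveItems() reads only the header information, and decompression happens only when extractSlow is called on the matched item.

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

import net.sf.sevenzipjbinding.ExtractOperationResult;
import net.sf.sevenzipjbinding.IInArchive;
import net.sf.sevenzipjbinding.SevenZip;
import net.sf.sevenzipjbinding.SevenZipException;
import net.sf.sevenzipjbinding.impl.RandomAccessFileInStream;
import net.sf.sevenzipjbinding.simple.ISimpleInArchiveItem;

public class ExtractOneFrom7z {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("my7zFile.7z", "r"); // assumed path
        IInArchive archive = SevenZip.openInArchive(null, // null = auto-detect format
                new RandomAccessFileInStream(raf));
        try (FileOutputStream out = new FileOutputStream("file1.pdf")) {
            // This loop only walks the header info; nothing is decompressed yet.
            for (ISimpleInArchiveItem item : archive.getSimpleInterface().getArchiveItems()) {
                if ("file1.pdf".equals(item.getPath())) { // assumed entry name
                    // Extraction happens only for this single item.
                    ExtractOperationResult result = item.extractSlow(data -> {
                        try {
                            out.write(data);
                        } catch (IOException e) {
                            throw new SevenZipException("Failed to write output", e);
                        }
                        return data.length; // bytes consumed
                    });
                    System.out.println("Extraction result: " + result);
                    break;
                }
            }
        } finally {
            archive.close();
            raf.close();
        }
    }
}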
I once wrote some code to read all the files and folders in a zip file. I had a long text-file/folder hierarchy inside the zip. I am not sure whether it will help you or not, but I am sharing the skeleton of the code.
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

ZipFile zipFile = new ZipFile(filepath); // filepath of the zip file
Enumeration<? extends ZipEntry> entries = zipFile.entries();
while (entries.hasMoreElements()) {
    ZipEntry entry = entries.nextElement();
    if (entry.isDirectory()) { // found a directory inside the zipFile
        // write your code here
    } else {
        InputStream stream = zipFile.getInputStream(entry);
        BufferedReader reader = new BufferedReader(new InputStreamReader(stream));
        // write your code to read the content of the file
    }
}
You can modify the code to reach your desired file in the zip. But I don't think you will be able to access the file directly; rather, you have to walk through all the paths in the zip archive. Note that ZipFile iterates through all files and folders inside a zipped file in DFS (depth-first search) order. You will find detailed examples on the web.

Multiple directories as Input format in hadoop map reduce

I am trying to run a graph verifier app on a distributed system using Hadoop.
I have the input in the following format:
Directory1
---file1.dot
---file2.dot
…..
---filen.dot
Directory2
---file1.dot
---file2.dot
…..
---filen.dot
Directory670
---file1.dot
---file2.dot
…..
---filen.dot
.dot files are files storing the graphs.
Is it enough for me to add the input directories path using FileInputFormat.addInputPath()?
I want Hadoop to process the contents of each directory on the same node, because the files in each directory contain data that depends on the presence of the other files in that directory.
Will the Hadoop framework take care of distributing the directories equally to the various nodes of the cluster (e.g. directory 1 to node1, directory 2 to node2, and so on) and process them in parallel?
The files in each directory depend on each other for data. To be precise:
each directory contains a file main.dot holding an acyclic graph whose vertices are the names of the rest of the files,
so my verifier traverses each vertex of the graph in main.dot, searches for the file of the same name in the same directory and, if found, processes the data in that file.
Similarly, all the files are processed and the combined output after processing each file in the directory is displayed;
the same procedure applies to the rest of the directories.
To cut a long story short:
As in the famous word-count application (where the input is a single book), Hadoop splits the input and distributes the work to each node in the cluster, where the mapper processes each line and counts the relevant words.
How can I split the task here (do I need to split it at all)?
How can I leverage Hadoop's power for this scenario? A sample code template would certainly help :)
The solution given by Alexey Shestakov will work, but it does not leverage MapReduce's distributed processing framework: probably only one map process will read the file (the file containing the paths of all input files) and then process the input data.
How can we allocate all the files in a directory to a single mapper, so that the number of mappers equals the number of directories?
One solution could be to use the "org.apache.hadoop.mapred.lib.MultipleInputs" class.
Use MultipleInputs.addInputPath() to register each directory and a map class for each directory path, as sketched below. Each mapper can then get one directory and process all the files within it.
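A hedged sketch of that idea follows, using org.apache.hadoop.mapreduce.lib.input.MultipleInputs (the newer-API counterpart of the class named above). The input paths, output path, job name and mapper class are illustrative assumptions; a real job could register a different mapper class per directory.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class GraphVerifierDriver {

    // Placeholder mapper; each registered path could use its own mapper class.
    public static class DotFileMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // verify/emit graph data here
            context.write(new Text("line"), value);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "graph-verifier");
        job.setJarByClass(GraphVerifierDriver.class);

        // Register each directory (optionally with its own mapper class).
        MultipleInputs.addInputPath(job, new Path("/input/Directory1"),
                TextInputFormat.class, DotFileMapper.class);
        MultipleInputs.addInputPath(job, new Path("/input/Directory2"),
                TextInputFormat.class, DotFileMapper.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(job, new Path("/output/graph-verifier"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}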
You can create a file with list of all directories to process:
/path/to/directory1
/path/to/directory2
/path/to/directory3
Each mapper would process one directory, for example:
@Override
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    // Each input line is a directory path; list and process every file in it.
    FileSystem fs = FileSystem.get(context.getConfiguration());
    for (FileStatus status : fs.listStatus(new Path(value.toString()))) {
        // process file
    }
}
Will the Hadoop framework take care of distributing the directories equally to the various nodes of the cluster (e.g. directory 1 to node1, directory 2 to node2, and so on) and process them in parallel?
No, it won't. Files are not distributed to each node in the sense that the files are copied to the node to be processed. Instead, to put it simply, each node is given a set of file paths to process, with no guarantee on location or data locality. The task then pulls each file from HDFS and processes it.
There's no reason why you can't just open other files you may need directly from HDFS.
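As a tiny hedged illustration of that point, the fragment below (meant to live inside a map() method like the one above; the path is an assumption) opens a referenced file directly from HDFS:

// Open a sibling file (e.g. a vertex file named in main.dot) straight from HDFS.
FileSystem fs = FileSystem.get(context.getConfiguration());
Path referenced = new Path("/input/Directory1/vertexA.dot"); // assumed path
try (FSDataInputStream in = fs.open(referenced)) {
    // read and verify the referenced graph file here
}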

Read tgz w/out unpacking it onto computer or Unpack as temp & delete when program closes?

Hey guys, I'm currently using jarchivelib, which can be found here. I'm stuck on figuring out a way to read the files without having to use the unpack method, because that writes an unpacked copy to disk. For example:
File archive = new File("/home/jack/archive.zip");
File destination = new File("/home/jack/archive");
Archiver archiver = ArchiverFactory.createArchiver(ArchiveFormat.ZIP);
archiver.extract(archive, destination);
I want to make it so I don't have to unpack the archive to read the files... If there is no way to do that, I'm guessing I'll have to set a custom handler for JFrame.setDefaultCloseOperation so it deletes the files? Or is there a better way of handling temp files?
If all you want to do is extract the file, why not use Java's built-in zip support, or Zip4j if the archive is password protected? These libraries support streams, so you can extract the contents of a file without writing it out to disk first.
As of version 0.4.0, the jarchivelib Archiver API supports streaming an archive rather than extracting it directly onto the filesystem.
ArchiveStream stream = archiver.stream(archive);
ArchiveEntry entry;
while ((entry = stream.getNextEntry()) != null) {
    // access each archive entry individually using the stream
    // or extract it using entry.extract(destination)
    // or fetch meta-data using entry.getName(), entry.isDirectory(), ...
}
stream.close();
When the stream is pointing to an entry after calling getNextEntry, you can use the stream.read methods just as you would when reading an individual entry.
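For instance, a small sketch building on the snippet above (the entry name is an assumption, and it relies on the InputStream-style read methods the answer mentions) could copy a single entry into memory:

ArchiveStream stream = archiver.stream(archive);
ArchiveEntry entry;
while ((entry = stream.getNextEntry()) != null) {
    if ("archive/data.txt".equals(entry.getName())) { // assumed entry name
        ByteArrayOutputStream content = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = stream.read(buffer)) != -1) { // reads only the current entry
            content.write(buffer, 0, read);
        }
        // use content.toByteArray() in memory; nothing is written to disk
        break;
    }
}
stream.close();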

Can I store a file in an ArrayList in Java using getResource?

New to Java. I am building a Java HTTP server (no special libraries allowed). There are certain files I need to serve (templates is what I call them) and I was serving them up using this piece of code:
this.getClass().getResourceAsStream("/http/templates/404.html")
And including them in my .jar. This was working. (I realize I was reading them as an input stream.)
Now I want to store all of my files (as File type) for templates, regular files, and redirects in a hashmap that looks like this: url -> file. Then I have a Response class that serves up the files.
This works for everything except my templates. If I try to insert the getResource code in the hashmap, I get an error in my Response class.
This is my code that I am using to build my hashmap:
new File(this.getClass().getResource("/http/templates/404.html").getFile())
This is the error I'm getting:
Exception in thread "main" java.io.FileNotFoundException: file:/Users/Kelly/Desktop/Java_HTTP_Server/build/jar/server.jar!/http/templates/404.html (No such file or directory)
I ran this command and can see the templates in my jar:
jar tf server.jar
Where is my thinking going wrong? I think I'm missing a piece to the puzzle.
UPDATE: Here's a slice of what I get when I run the last command above... so I think I have the path to the file correct?
http/server/serverSocket/SystemServerSocket.class
http/server/serverSocket/WebServerSocket.class
http/server/ServerTest.class
http/templates/
http/templates/404.html
http/templates/file_directory.html
http/templates/form.html
The FileNotFoundException error you are getting is not from this line:
new File(this.getClass().getResource("/http/templates/404.html").getFile())
It appears that after storing these File objects in the hash map, you are trying to read the files (or serve them by reading with FileInputStream or related APIs). It would have been more useful if you had given the stack trace and the code that is actually throwing this exception.
But the point is that files inside a JAR are not the same as files on disk. In particular, a File object represents an abstract path name on disk, and all standard libraries using a File object assume that it is accessible. So /a/path/like/this is a valid abstract path name, but file:/Users/Kelly/Desktop/Java_HTTP_Server/build/jar/server.jar!/http/templates/404.html is not. That is exactly what you get when you call getResource("/http/templates/404.html").getFile(): a string representing something that does not exist as a file on disk.
There are two ways you can serve resources from class path directly:
1. Directly return the stream as a response to the request. this.getClass().getResourceAsStream() will return an InputStream object which you can then return to the caller. This requires you to store something stream-based in your hash map instead of a File; you can have two hash maps, one for files from the class path and one for files on disk.
2. Extract all the templates (possibly on first access) to a temporary location, say /tmp, and then store the File object representing the newly extracted file.
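A minimal sketch of the first option (all class, URL and path names below are assumptions) keeps the resource or file path in the map and opens a fresh stream per request, which avoids storing single-use InputStream objects:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;

public class TemplateStore {
    private final Map<String, String> templates = new HashMap<>(); // url -> classpath resource
    private final Map<String, String> diskFiles = new HashMap<>(); // url -> filesystem path

    public TemplateStore() {
        templates.put("/404", "/http/templates/404.html");
        diskFiles.put("/readme", "/var/www/readme.txt"); // illustrative path
    }

    // The Response class can consume this stream regardless of where the bytes live.
    public InputStream open(String url) throws IOException {
        if (templates.containsKey(url)) {
            return getClass().getResourceAsStream(templates.get(url)); // works inside a jar
        }
        if (diskFiles.containsKey(url)) {
            return new FileInputStream(diskFiles.get(url));
        }
        return null;
    }
}

Storing the path rather than a live stream means each request gets its own stream, so the same template can be served repeatedly.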
