Handling ZIP content in Java that uses the SHRINK algorithm

Anyone know of a way to handle ZIP files produced using the SHRINK algorithm? This doesn't appear to be supported by the standard Java ZIP functionality.
We're receiving ZIP files from an upstream system that (amazingly) use SHRINK-based compression. This seems to come from an older mainframe-based ZIP encoder that can't easily be modified to use something more modern.

In the interests of accepting an answer: it sounds like this isn't possible to do directly in Java without porting code or building a JNI layer to call native libraries that can handle it.
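For anyone finding this later: Apache Commons Compress added read support for the legacy SHRINK ("unshrinking") method in later versions (around 1.7, if memory serves). A minimal sketch, assuming a recent Commons Compress on the classpath (the file name is illustrative; verify SHRINK support against your version):

import java.io.FileInputStream;
import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.apache.commons.compress.archivers.zip.ZipArchiveInputStream;

// Iterate the archive, skipping any entry whose compression method
// this build of Commons Compress cannot decode.
try (ZipArchiveInputStream zin =
        new ZipArchiveInputStream(new FileInputStream("upstream.zip"))) {
    ZipArchiveEntry entry;
    while ((entry = zin.getNextZipEntry()) != null) {
        if (!zin.canReadEntryData(entry)) {
            continue; // unsupported method; log or fail as appropriate
        }
        // read the decompressed bytes for this entry from zin here
    }
}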

Related

Decoding/Processing TTA Files Using Java

For a side-project to learn web development and database management, I decided to make a radio-like website using the vast collection of TTA audio files that I have (along with the cue files that complement them).
However, the biggest hurdle in this project has been, well, handling the audio itself. I'm not really sure how to go about decoding the TTA files and using that information to apply the cue sheets properly.
I've been all over the Tau Projects website and I'm really at a loss for how to proceed. Below are some options I have considered, though I'm not sure how they'd work or how to go about them.
1. Audio Library that Supports TTA
The easiest solution, if one exists. Of the few audio libraries that exist for Java (at least, that I know of), none support the TTA format. If I am mistaken, please correct me.
2. Using JNI/JNA to Hook into Decoders
The Tau Projects site has a list of downloads for various plugins and, I assume, implementations of encoders/decoders (like ttaenc-win). I don't know much about C/C++, so this may be a challenge for me. From what I can understand, the libtta++ download is just a bunch of interfaces, and I can only assume that ttaenc-win is an implementation of them (it's only a .exe file, so I'm not sure). If one of the downloads is an implementation of those interfaces, then I could use JNI/JNA to hook into it and process the audio that way (a rough sketch of this idea follows the list below). If so, this could work, but I'd need some guidance.
3. Using vlcj
VLC supports the TTA format, so it may be possible to use vlcj to process the audio that way. However, from my understanding of how vlcj works, it would create an instance of VLC every time I use it (and would also require VLC to be installed). If that's true, this solution would be grossly inefficient and wouldn't scale at all, even for a small number of requests (as I assume I would need a new VLC instance for each user).
4. Writing a Pure Java Implementation
Being open source means the algorithm could well be reimplemented natively in Java. But as I said earlier, I don't understand C/C++ well enough to write a Java-equivalent implementation, and I'm not able to decipher the information about the algorithm on the Tau Projects site (information can be found here and here).
This would be by far the most lightweight, versatile, and portable option. If it can be achieved, it's probably the route I'd like to pursue, but I would need guidance on it.
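For illustration of option 2 only, here is the general shape a JNA binding might take. Everything below is hypothetical: libtta++ exposes a C++ API, so it would first need a thin C wrapper exporting plain functions (the library and function names here are invented) before JNA could load it.

import com.sun.jna.Library;
import com.sun.jna.Native;

// Hypothetical binding: assumes a native shared library "ttawrap" that
// wraps libtta++ behind C functions with these invented names.
public interface TtaWrap extends Library {
    TtaWrap INSTANCE = Native.load("ttawrap", TtaWrap.class);

    // Invented signature: decode a whole .tta file to raw PCM on disk,
    // returning 0 on success.
    int tta_decode_file(String ttaPath, String pcmOutPath);
}

Usage would then be an ordinary Java call: TtaWrap.INSTANCE.tta_decode_file("in.tta", "out.pcm").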

Efficient LZ4 multiple file compression using java

I took Adrien Grand's Java repository, which provides JNI bindings to the original native LZ4 code.
I want to compress multiple files under a given input directory, but LZ4 doesn't support multi-file archives the way the java.util.zip package does, so I tried another approach: tar all the input files and pipe the result into the LZ4 compressor, using the jtar Java package for the tarring. Is there a better way than this?
I came across many code samples that compress strings and show how to correctly use the LZ4 compressor and decompressor, but I want to know how to actually apply this to multiple files, and whether I'm going in the correct direction.
After tarring all the files, the sample usage says I have to convert the tarred file to a byte array to feed it to the compressor; I used the Apache Commons IOUtils package for this. Given many input files, and therefore a tar of huge size, always converting it to a byte array seems ineffective to me. So first, is this effective or not, or is there a better way to use the LZ4 package?

The other problem I came across was the end result. After compressing the tarred files I get an output like MyResult.lz4, but I was not able to decompress it using the archive manager (I'm using Ubuntu) because it doesn't support that format. I'm also not clear about the archive and compression formats I should use here, or what format the end result should be in.

Speaking from a user's point of view: if I'm generating a backup for users and provide them with a traditional .zip, .gz, or other well-known format, they can decompress it themselves. Just because I know LZ4 doesn't mean I should expect users to know the format too; they may be baffled on seeing it. On that reasoning, a conversion from .lz4 to .zip also seems meaningless.

I already see the tarring of all my input files as a time-consuming process, so I want to know how much it affects performance. As far as I've seen, the java.util.zip package compresses multiple input files without any problem. Besides LZ4 I came across Apache Commons Compress and TrueZIP, along with several Stack Overflow links about them that helped me learn a lot. As of now I really want to use LZ4 for compression, especially for its performance, but I've hit these hurdles. Can anyone with good knowledge of the LZ4 package provide solutions to these queries and problems, along with a simple implementation? Thanks.
Times I measured for an input consisting of many files:
Time taken for taring : 4704 ms
Time taken for converting file to byte array : 7 ms
Time Taken for compression : 33 ms
Some facts:
LZ4 is no different here than GZIP: it is a single-concern project, dealing with compression. It does not deal with archive structure. This is intentional.
Adrien Grand's LZ4 lib produces output incompatible with the command-line LZ4 utility. This is also intentional.
Your approach with tar seems OK, because that's exactly how it's done with GZIP.
Ideally you should make the tar code produce a stream that is compressed on the fly, instead of the archive first being stored entirely in RAM. This is what Unix pipes achieve at the command line.
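A minimal sketch of that streaming shape, shown with GZIP from java.util.zip and TarArchiveOutputStream from Commons Compress (the path and entry handling are illustrative; an LZ4 output stream can be nested the same way):

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.util.zip.GZIPOutputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

// Entries written to the tar stream are compressed as they pass through,
// so the archive never has to exist as one big byte[] in memory.
try (TarArchiveOutputStream tar = new TarArchiveOutputStream(
        new GZIPOutputStream(
            new BufferedOutputStream(new FileOutputStream("backup.tar.gz"))))) {
    // for each file: tar.putArchiveEntry(entry); copy bytes; tar.closeArchiveEntry();
}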
I had the same problem. The current release of LZ4 for Java is incompatible with the LZ4 frame standard for streams that was developed later. However, the project's repo has a patch that supports the standard for compressing/decompressing streams, and I can confirm it is compatible with the command-line tool. You can find it here: https://github.com/jpountz/lz4-java/pull/61
In Java you can use that together with TarArchiveInputStream from Apache Commons Compress.
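A minimal sketch of that reading side, assuming the patched frame stream ends up available as net.jpountz.lz4.LZ4FrameInputStream (it was merged into later lz4-java releases; check your version) and with an illustrative file name:

import java.io.FileInputStream;
import net.jpountz.lz4.LZ4FrameInputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

// Unwrap the LZ4 frame first, then iterate the tar entries inside it.
try (TarArchiveInputStream tar = new TarArchiveInputStream(
        new LZ4FrameInputStream(new FileInputStream("backup.tar.lz4")))) {
    TarArchiveEntry entry;
    while ((entry = tar.getNextTarEntry()) != null) {
        // read this entry's bytes from tar
    }
}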
For a fuller example, the code I use is in the Maven artifact io.github.htools 0.27-SNAPSHOT (or on GitHub): the classes io.github.htools.io.compressed.TarLz4FileWriter and (the obsolete class) io.github.htools.io.compressed.TarLz4File show how it works. In HTools, tar and LZ4 are used automatically through ArchiveFile.getReader(String filename) and ArchiveFileWriter(String filename, int compressionlevel), provided your filename ends with .tar.lz4.
You can chain I/O streams together, so using something like TarArchiveOutputStream from Apache Commons Compress and LZ4FrameOutputStream from lz4-java:
try (LZ4FrameOutputStream outputStream =
         new LZ4FrameOutputStream(new FileOutputStream("path/to/myfile.tar.lz4"));
     TarArchiveOutputStream taos = new TarArchiveOutputStream(outputStream)) {
    // write entries with taos.putArchiveEntry(...), copy the bytes,
    // then taos.closeArchiveEntry()
}
Consolidating the bytes into a byte array will create a bottleneck, because you are then trying to hold the entire stream in memory, which can easily run into OutOfMemory problems with large streams. Instead, you'll want to pipeline the bytes through all the chained I/O streams, as above.
I created a Java library that does this for you: https://github.com/spoorn/tar-lz4-java
If you want to implement it yourself, here's a technical doc with details on how to LZ4-compress a directory using TarArchive from Apache Commons and lz4-java: https://github.com/spoorn/tar-lz4-java/blob/main/SUMMARY.md#lz4

Speed Optimization in J2EE Application, Client side?

Currently I am working on speed optimization of a J2EE application. The performance of the application (in my case the server itself is running pretty fast) depends mostly on the time it takes to download the associated files such as JS, CSS, etc.
My Question:
Is there any way to compress these files (JS, CSS, images, etc.) and send them to the client machine?
I have come across some tools that compress the JS into a single line, but they're causing problems with the current syntax.
I'd like to know a way to compress and send the files, if possible, for the best client-side performance.
There are basically two ways to achieve this:
1) Minify your CSS and JS with a minification tool, of which many are available online.
I use Google Closure Tools; it uses Rhino to interpret your code while modifying it, to ensure that it still works after minification. Many free tools exist: YUI Compressor, UglifyJS, etc.
UglifyJS is also good; try it here: http://marijnhaverbeke.nl/uglifyjs
Google Closure Tools: https://developers.google.com/closure/
2) Gzip your CSS and JS, and do server-specific configuration to enable gzip.
Here is a link where you can learn how gzip compression works:
http://betterexplained.com/articles/how-to-optimize-your-site-with-gzip-compression/
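As a rough illustration of option 2 at build time: static assets can also be pre-compressed once with java.util.zip, so the server can serve the .gz variant with Content-Encoding: gzip instead of compressing on every request (file paths are illustrative):

import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.GZIPOutputStream;

public class PreGzip {
    public static void main(String[] args) throws Exception {
        Path in = Paths.get("web/css/style.css");      // illustrative input
        Path out = Paths.get("web/css/style.css.gz");  // serve with Content-Encoding: gzip
        try (OutputStream gz = new GZIPOutputStream(Files.newOutputStream(out))) {
            Files.copy(in, gz); // compress once at build time, not per request
        }
    }
}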

What compression library works with GAE

With GAE we can't pick just about any Java compression library and expect it to work out of the box. Even the Snappy library (which I tried because it was said to be a port of Google's compression lib) won't work, throwing an access-denied ("java.io.FilePermission") exception. That is expected, since file I/O is not supported.
So I'd like to ask the community for Java compression libraries that are tested to work with GAE without hacking or repackaging.
Checking the class whitelist, you can use java.util.zip to read compressed streams:
new java.util.zip.GZIPInputStream(inputStream)
and
new java.util.zip.GZIPOutputStream(outputStream)
to compress content to an output stream.
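A minimal in-memory round trip using only those whitelisted classes (no file I/O, so it should stay within GAE's sandbox; the sample data is illustrative):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

byte[] data = "hello GAE".getBytes("UTF-8");

// Compress entirely in memory.
ByteArrayOutputStream buf = new ByteArrayOutputStream();
try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
    gz.write(data);
}
byte[] compressed = buf.toByteArray();

// Decompress back out of memory.
try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
    ByteArrayOutputStream restored = new ByteArrayOutputStream();
    byte[] chunk = new byte[4096];
    for (int n; (n = in.read(chunk)) != -1; ) {
        restored.write(chunk, 0, n);
    }
}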

what is a good cross-platform css compressor?

I need to compress my CSS as part of my Ant build. I noticed that csstidy does this, but it would not be easy to include in my Ant build because I would need a different binary on each platform.
So, is there a Java CSS compressor that people use?
Check out the Yahoo YUI compressor.
It compresses CSS as well as JavaScript, and it's written in Java.
Edit: You should be using some sort of HTTP compression as well, like mod_deflate or mod_gzip.
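Since it can be embedded as a library, here is a minimal sketch of calling it from Java (class and method names as shipped in the yuicompressor jar; file names are illustrative, so verify against your version):

import java.io.FileReader;
import java.io.FileWriter;
import java.io.Reader;
import java.io.Writer;
import com.yahoo.platform.yui.compressor.CssCompressor;

// Read a stylesheet and write the minified version next to it.
try (Reader in = new FileReader("styles.css");
     Writer out = new FileWriter("styles.min.css")) {
    CssCompressor compressor = new CssCompressor(in);
    compressor.compress(out, -1); // -1 = don't force line breaks
}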
I recently released CSS Compressor. It's a code fork of YUI Compressor, but it adds compression enhancements that make it more similar to csstidy, particularly with regard to grouping selectors that share the same rules. It's Java, of course, and can be called via the command line or used as a library within your own Java app.
