With GAE we can't pick just about any Java compression library and expect it to work out of the box. Even Snappy (which I tried because it's described as a port of Google's compression library) won't work; it throws an access-denied java.io.FilePermission exception, which is expected since file I/O is not supported.
So I'd like to ask the community for Java compression libraries that are tested to work with GAE without hacking or repackaging.
Checking the class whitelist, you can use java.util.zip to read compressed streams:
new java.util.zip.GZIPInputStream(inputStream)
and
new java.util.zip.GZIPOutputStream(outputStream)
to compress content to an output stream.
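As a minimal sketch of that approach (the class and method names here are my own, not a GAE API), a round trip through the whitelisted java.util.zip streams works entirely in memory:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical helper class; GAE only needs the java.util.zip stream classes.
public class GzipRoundTrip {

    // Compress bytes entirely in memory; no file I/O, so it stays inside GAE's sandbox.
    public static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // Decompress from an in-memory stream back to the raw bytes.
    public static byte[] decompress(byte[] gzipped) throws IOException {
        try (InputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        String text = "hello hello hello hello";
        byte[] packed = compress(text.getBytes("UTF-8"));
        System.out.println(new String(decompress(packed), "UTF-8"));
    }
}
```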
I'm using Adrien Grand's Java repository, which provides JNI bindings to the original LZ4 native code.
I want to compress multiple files under a given input directory, but LZ4 doesn't support multi-file archives the way the Java zip package does. So I tried another approach: tar all my input files and pipe the result as input to the LZ4 compressor, using the JTar Java package for the tarring. Is there a better way than this?
I came across many code samples for compressing strings and for correctly implementing the LZ4 compressor and decompressor. Now I want to know how to actually apply this to multiple files, and whether I'm going in the right direction.
After tarring all the files, the sample code's usage explanation says I have to convert the tar file to a byte array to feed it to the compressor module. I used the Apache Commons IO package for this. Since I have many input files, resulting in a very large tar, converting it to a byte array every time seems inefficient to me. Is this approach effective, or is there a better way to use the LZ4 package?
Another problem is the end result. After compressing the tarred files I get an output like MyResult.lz4, but I can't decompress it with the archive manager (I'm using Ubuntu), as it doesn't support this format. I'm also not clear about the archive and compression formats I should use here, or what format the end result should be in.

Speaking from a user's point of view: if I'm generating a backup for users and I give them a traditional .zip, .gz, or other well-known format, they can decompress it themselves. Just because I know LZ4 doesn't mean I should expect the user to know that format too, right? They may even be baffled by it. On the other hand, converting from .lz4 to .zip also seems meaningless.

I also see the tarring of all my input files as a time-consuming step, so I'd like to know how much it affects performance; with the Java zip package, compressing multiple input files didn't seem to be a problem at all. Besides LZ4 I came across Apache Commons Compress and TrueZIP, and several Stack Overflow links about them that taught me a lot. As of now I really want to use LZ4, especially for its performance, but I've hit these hurdles. Can anyone with good knowledge of the LZ4 package provide solutions to these questions and problems, along with a simple implementation? Thanks.
Times I measured for an input consisting of many files:
Time taken for taring : 4704 ms
Time taken for converting file to byte array : 7 ms
Time Taken for compression : 33 ms
Some facts:
LZ4 is no different here than GZIP: it is a single-concern project, dealing with compression. It does not deal with archive structure. This is intentional.
Adrien Grand's LZ4 lib produces output incompatible with the command-line LZ4 utility. This is also intentional.
Your approach with tar seems OK, because that's how it's done with GZIP.
Ideally you should make the tar code produce a stream that is compressed immediately, instead of first being stored entirely in RAM. This is what Unix pipes achieve at the command line.
I had the same problem. The current release of LZ4 for Java is incompatible with the later-developed LZ4 standard for streams. However, in the project's repo there is a patch that supports the standard for compressing/decompressing streams, and I can confirm it is compatible with the command-line tool. You can find it here: https://github.com/jpountz/lz4-java/pull/61
In Java you can use that together with TarArchiveInputStream from Apache Commons Compress.
If you want an example, the code I use is in the Maven artifact io.github.htools 0.27-SNAPSHOT (or on GitHub). The classes io.github.htools.io.compressed.TarLz4FileWriter and (the obsolete class) io.github.htools.io.compressed.TarLz4File show how it works. In HTools, tar and LZ4 are used automatically through ArchiveFile.getReader(String filename) and ArchiveFileWriter(String filename, int compressionlevel), provided your filename ends with .tar.lz4
You can chain I/O streams together, so using something like TarArchiveOutputStream from Apache Commons Compress and LZ4FrameOutputStream from lz4-java:
try (LZ4FrameOutputStream outputStream = new LZ4FrameOutputStream(new FileOutputStream("path/to/myfile.tar.lz4"));
     TarArchiveOutputStream taos = new TarArchiveOutputStream(outputStream)) {
    ...
}
Consolidating the bytes into a byte array will cause a bottleneck, because you are holding the entire stream in memory, which can easily run into OutOfMemoryError problems with large streams. Instead, pipeline the bytes through the chained I/O streams as above.
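To make the pipelining concrete without the third-party jars, here is the same pattern sketched with only java.util.zip's ZipOutputStream standing in for the tar/LZ4 pair (with lz4-java and Commons Compress on the classpath you would swap in LZ4FrameOutputStream and TarArchiveOutputStream). The class name is my own; the point is that file contents are copied buffer by buffer and never held in one byte array:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Sketch: archive every regular file in a directory by streaming,
// analogous to tar -> LZ4 but with JDK-only classes.
public class StreamingArchiver {

    public static void archiveDirectory(File dir, File zipFile) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipFile))) {
            File[] files = dir.listFiles();
            if (files == null) return;
            for (File f : files) {
                if (!f.isFile()) continue;
                zos.putNextEntry(new ZipEntry(f.getName()));
                try (InputStream in = new FileInputStream(f)) {
                    copy(in, zos); // buffer-sized chunks, never the whole file
                }
                zos.closeEntry();
            }
        }
    }

    // Fixed-size buffer copy: memory use is constant regardless of input size.
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }
}
```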
I created a Java library that does this for you: https://github.com/spoorn/tar-lz4-java
If you want to implement it yourself, here's a technical doc that includes details on how to LZ4 compress a directory using TarArchive from Apache Commons and lz4-java: https://github.com/spoorn/tar-lz4-java/blob/main/SUMMARY.md#lz4
I am currently working on speed optimization of a J2EE application. In my case the server is running quite fast, so the application's performance depends mostly on the time it takes to download the associated files such as JS and CSS.
My question:
Is there any way to compress these files (JS, CSS, images, etc.) and send them to the client machine?
I have come across some tools that compress the JS into a single line, but they cause problems with my current syntax.
I'd like to know a way to compress and send the files, if possible, for the best client-side performance.
There are basically two ways to achieve this:
1) Minify your CSS and JS with a minify tool; many are available online.
I use Google Closure Tools; it uses Rhino to interpret your code while modifying it, to ensure that it still works after minification. Many free tools exist: YUI Compressor, UglifyJS, etc.
UglifyJS is also good; try it here: http://marijnhaverbeke.nl/uglifyjs
Google Closure Tools: https://developers.google.com/closure/
2) Gzip your CSS and JS. Do server-specific configuration to enable gzip.
Here is a link explaining how gzip works:
http://betterexplained.com/articles/how-to-optimize-your-site-with-gzip-compression/
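For option 2, the usual place to enable gzip is the server or container configuration (for example the compression attributes of a Tomcat connector), but the mechanics can be sketched with the JDK's built-in com.sun.net.httpserver: check the client's Accept-Encoding header and wrap the response body in a GZIPOutputStream. The class, path, and file contents below are my own illustration, not a J2EE-specific API:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.zip.GZIPOutputStream;

// Sketch of gzip content negotiation; real deployments would normally
// configure compression in the servlet container instead.
public class GzipStaticServer {

    public static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/app.js", exchange -> {
            byte[] body = "function hello(){return 'hi';}".getBytes("UTF-8");
            String accept = exchange.getRequestHeaders().getFirst("Accept-Encoding");
            if (accept != null && accept.contains("gzip")) {
                // Client supports gzip: compress the body on the fly.
                exchange.getResponseHeaders().set("Content-Encoding", "gzip");
                exchange.sendResponseHeaders(200, 0); // 0 = length unknown (chunked)
                try (OutputStream out = new GZIPOutputStream(exchange.getResponseBody())) {
                    out.write(body);
                }
            } else {
                // Fall back to the uncompressed bytes.
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            }
        });
        server.start();
        return server;
    }
}
```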
I'm afraid the answer to my question is no, but I'm asking it anyway just in case.
What I would like to do is stream audio from a Chrome browser to a server written in Java via WebRTC. My understanding is that to accomplish this I need a Java implementation of PeerConnection. All I've found so far is the libjingle Java API for Android, but that hasn't been particularly useful for integrating into my server app (I'd prefer an actual Java implementation, not just a C++ wrapper).
If a library to do what I want really doesn't exist does anyone have any pointers for how I might approach actually implementing the WebRTC spec myself? When I look at such a large spec I don't really know where to start.
You can use IceLink. We (I helped develop it) wrote a Java implementation for it, as well as .NET and Objective C.
This is a complete Java WebRTC signal server written with Java I/O sockets: https://code.google.com/p/jfraggws/ Just create a project, include the .java file, and include rt.jar in the project. Next, set the port on the HTML5 client and plug in your server's IP. You now have Java WebRTC.
Is there a way to receive a byte array or string array from the server side in a GWT client and open it as a file?
The byte array is already in memory, and we don't want to write it to a file on the server and pass the URL back to the client.
Thanks
GWT Java is compiled into JavaScript.
So, try writing a JavaScript-based app first that opens your server file "as a file" in the JavaScript client. Even if you do not know JavaScript, at least perform a thought experiment:
- What limitations do browsers place?
- Why do browser conventions place such limitations?
Whatever your JavaScript app cannot do, your GWT app cannot do either.
What you are thinking of is using a File I/O API to access files sitting on the server. There are two possible reasons why you might want to do that:
You are familiar with File I/O, you want to do in GWT Java what you have been doing for years in Java, and you are too fixated to change your perspective.
You want to write a web-based interface to your operating system, and you have grand plans for your app.
If you are in situation 1, you are lucky. You simply need to change your perspective to respect the asynchronicity and remoteness of thin-client-to-server communications.
But if this is a Mount Everest you have to climb and you still persist in applying File I/O patterns to GWT, be prepared for a large project. I say "File I/O patterns" rather than "file I/O" because you would have to emulate them. Browser security does not allow you to open an arbitrary file on the browser's system, and for that reason there is no point in GWT supplying that functionality.
Secondly, File I/O belongs to the java.io realm, and browser security denies you most of java.io's functionality. Without java.io on GWT, how can you have File I/O?
What you can do is scale down your expectations and write a specification of the File I/O features you would like to have: open, close, read, etc. Then write some GWT Java classes that perform those little bits of emulated file I/O.
A quick search for "GWT inputstream outputstream" turns up some open-source projects from which you could borrow code to achieve your goal of emulating client-server file I/O through GWT.
But my advice is to translate your spec of functionality into a REST service. REST is how Google Docs is accessed. Study the Google Data API and learn how they do it, including the authentication framework.
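As one concrete shape for such a service (the class, path, and filename below are my own, and a real app would likely use a servlet or REST framework rather than the JDK's com.sun.net.httpserver), the server can stream the in-memory byte array with a Content-Disposition header, so the browser treats the response as a downloadable file and the GWT client only has to open the URL:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Sketch: expose an in-memory byte[] as a file download, so nothing
// is ever written to disk on the server.
public class DownloadEndpoint {

    public static HttpServer start(byte[] payload) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/download/report", exchange -> {
            // Tell the browser to save the body as a file instead of rendering it.
            exchange.getResponseHeaders().set("Content-Type", "application/octet-stream");
            exchange.getResponseHeaders().set("Content-Disposition",
                    "attachment; filename=\"report.bin\"");
            exchange.sendResponseHeaders(200, payload.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(payload);
            }
        });
        server.start();
        return server;
    }
}
```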
Anyone know of a way to handle ZIP files produced using the SHRINK algorithm? This doesn't appear to be supported by the standard Java ZIP functionality.
We're receiving ZIP files from an upstream system that (amazingly) have SHRINK-based compression in use. This seems to be from an older mainframe-based ZIP encoder that can't be easily modified to use something more modern.
In the interests of accepting an answer, it sounds like it's not possible to do directly in Java without porting code or building a JNI layer to hit native libraries that can handle this.