How to optimize an image in Java for performance

I'm transferring an image over TCP/IP and I'd like to optimize it while keeping the quality as good as possible.
What kinds of methods or algorithms can I use?
P.S. Now that I think about it, maybe I should ask instead: what is the best and fastest way to send an image via TCP/IP?

To find the right answer to your question, you need to have a look at the images themselves. Are they real-world images captured on camera? Or are they synthetic images, like icons or graphs?
Lossy compression (like JPEG) works very well for real scenes with many gradients and smooth edges. For images with solid colors and hard edges, you get a much higher (even perceived) loss in image quality and less gain in compression rate compared to lossless compression.
Basically, the established image formats for your domain are PNG (Portable Network Graphics) and JPEG. PNG images are always compressed losslessly, but their compression algorithm works better than the competition, e.g. GIF. If the images are well suited, you get compression rates comparable to JPEG; if not (as with real-world images), you get typical ZIP compression rates (around 50%).
After deciding between lossy and lossless compression (or a combination based on picture type -- if processing time matters less than network throughput, you could also compress each image in both formats and compare), you should also take advantage of progressive coding, which is supported by both the JPEG and PNG formats.
With progressive coding, the data is organized so that the more data you receive, the better the quality becomes (rather than just sending the image row by row). The advantage is that you can already show the image to the user while it is still being received. However, for this you need a decoder that exposes this functionality.
I don't know about the libraries available in Java for this.
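For what it's worth, the standard javax.imageio API can do this much. A minimal sketch (not a complete sender) that encodes an in-memory BufferedImage as a progressive JPEG with an explicit quality setting:

import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Encodes a BufferedImage as a progressive JPEG with an explicit quality setting.
static byte[] toProgressiveJpeg(BufferedImage image, float quality) throws IOException {
    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionQuality(quality);                    // 0.0 = smallest file, 1.0 = best quality
    param.setProgressiveMode(ImageWriteParam.MODE_DEFAULT);  // progressive scan order
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (ImageOutputStream ios = ImageIO.createImageOutputStream(out)) {
        writer.setOutput(ios);
        writer.write(null, new IIOImage(image, null, null), param);
    } finally {
        writer.dispose();
    }
    return out.toByteArray();                                // ready to send over the socket
}

PNG can be written the same way with ImageIO.write(image, "png", out); there is no quality knob, since PNG is lossless.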

You should check out the Java Advanced Imaging API.
But to use it effectively you will need to understand what type of image operations are right for your problem. This will depend, among other things, on the encoding of your source image.
As for the "good quality as much as possible", you will most likely need to experiment with various compression techniques and their relevant parameters before deciding which one gives the right balance of speed, size and quality for your needs.

You may take a look at this; it's a comparison of common compression algorithms (quality and compression rate).
Edit: it is not directly Java, but you can probably find an implementation of the desired algorithm.

For images intended for human viewing, JPEG is quite nice. What is at the remote end? A browser?

Related

How do I live encode a video in Java with JCodec?

In short, my question is: which is the fastest format for encoding with JCodec without losing too much quality (like mangled colors)?
An example of what I mean by "mangled colours" can be found in the videos in the description of this issue.
The rest is contextual information about my considerations and what I have tried:
I am creating a screen recorder in Java. I have solved the issue of getting more than 10 FPS as BufferedImages (at least on Windows; Xorg is not very cooperative), but encoding is not fast enough to keep up.
My solution is threaded, with a producer, a consumer and a BlockingQueue for transferring frames.
I need it to be able to encode at least 15 FPS at full HD, but more is better.
I probably need to re-encode after encoding the first time, but for now I just want to store the frames without losing too much quality while saving at least some bits.
I am considering PRORES, since the other formats do not seem to play well (most just don't write anything, and h.264 mangles the colours), but is that a viable alternative?
Other ways of storing a lot of BufferedImage objects are welcome too, but I would prefer encoding directly to video. (I was considering writing PNGs or BMPs, enumerated, to a zip, but have not gotten my head around it yet.)
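For context, a minimal sketch of the producer/consumer arrangement described above; captureFrame() and encodeFrame() are placeholders for the actual Robot capture and JCodec encoding calls:

import java.awt.image.BufferedImage;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Capture thread produces frames, encoder thread consumes them; the bounded queue
// absorbs short bursts but blocks the producer if encoding cannot keep up.
static final BlockingQueue<BufferedImage> frames = new ArrayBlockingQueue<>(30);

static void startRecording() {
    Thread producer = new Thread(() -> {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                frames.put(captureFrame());   // placeholder: Robot-based screen grab
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
    Thread consumer = new Thread(() -> {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                encodeFrame(frames.take());   // placeholder: JCodec encode call
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
    producer.start();
    consumer.start();
}

With a bounded queue like this, the encoder becomes the effective frame-rate limit, which matches the behaviour described in the question.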

Compress images more

Is it possible to compress JPG and/or PNG images further, in a lossless manner? I've found websites that do this, but I assumed these files were all compressed the same way.
By compress, I mean the result is still a JPG or PNG file that any image program can read, but the file size is smaller.
And if so, what is the best way in both Java and C# to get the additional compression?
There are several compression algorithms out there, and some achieve better results than others on different images. I am not sure exactly what you want to know, as you mix Java and C# with image compression.
Recently, Google developers released a new format called WebP. They claim this format can deliver lossless images 26% smaller than PNGs, and lossy images that compare even better against JPEG; however, support for WebP is still quite limited.
What you can do is make use of the picture HTML element and deliver a WebP alternative with a fallback to a PNG or JPEG file.
Here's an example:
<picture>
<source srcset="img/awesomeWebPImage.webp" type="image/webp">
<source srcset="img/creakyOldJPEG.jpg" type="image/jpeg">
<img src="img/creakyOldJPEG.jpg" alt="Alt Text!">
</picture>
Here's a good article on using WebP: https://css-tricks.com/using-webp-images/
All JPEG compression is inherently lossy. The RGB<->YCbCr conversion often requires modifying values because the gamuts of the color spaces are different, and JPEG compression involves floating-point values being rounded to integers.
There are a lot of JPEG settings available to trade off loss against size. Whether you have access to them or not depends on your encoder.
In the case of PNG, the only real size setting is how far the encoder will search the LZ window for matches; it's a trade-off between compression speed and size.

Alternatives for generating a video feed from screenshots

I'm working on a remote administration toy project. For now, I'm able to capture screenshots and control the mouse using the Robot class. The screenshots are BufferedImage instances.
First of all, my requirements:
- Only a server and a client.
- Performance is important, since the client might be an Android app.
I've thought of opening two socket connections: one for mouse and system commands, and a second one for the video feed.
How could I convert the screenshots to a video stream? Should I convert them to a known video format, or would it be OK to just send a series of serialized images?
Compression is another problem. Sending the screen captures at full resolution results in a low frame rate, according to my preliminary tests. I think I need at least 24 fps to perceive movement, so I have to both downscale and compress. I could convert the BufferedImages to JPEG and then set the compression rate, but I don't want to store the files on disk; they should live in RAM only. Another possibility would be to serialize instances (representing an uncompressed screenshot) to a GZIPOutputStream. What is the correct approach for this?
To summarize:
In case you recommend the "series of images" approach, how would you serialize them to the socket OutputStream?
If your proposition is to convert to a known video format, which classes or libraries are available?
Thanks in advance.
UPDATE: my tests, with client and server on the same machine:
- Full-screen serialized BufferedImages (only dimensions, type and int[]), without compression: 1.9 fps.
- Full-screen images through GZip streams: 2.6 fps.
- Downscaled images (640 pixels wide) and GZip streams: 6.56 fps.
- Full-screen images and RLE encoding: 4.14 fps.
- Downscaled images and RLE encoding: 7.29 fps.
If it's just screen captures, I would not compress them using a video compression scheme; most likely you don't want lossy compression (blurred details in small text etc. are the most common defects).
For a workable "remote desktop" feel, remember the previously sent screenshot and send only the difference needed to get to the next one. If nothing (or very little) changes between frames, this is very efficient.
It will, however, not work well in certain situations, like playing a video, a game, or scrolling a lot in a document.
Compressing the difference between two BufferedImages can be done with more or less elaborate methods; a very simple yet reasonably effective method is to subtract one image from the other (resulting in zeros everywhere they are identical) and compress the result with simple RLE (run-length encoding).
Reducing the color precision can further reduce the amount of data (depending on the use case you could omit the least significant N bits of each color channel; most GUI applications look not much different if you reduce colors from 24 bits to 15 bits).
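A minimal sketch of that diff-plus-RLE idea, assuming both frames have the same dimensions: unchanged runs are written as a negative skip count and changed runs are stored literally, so the receiver can apply them on top of its copy of the previous frame.

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Very simple diff + run-length encoding of the unchanged runs, as described above.
static byte[] diffRle(BufferedImage prev, BufferedImage curr) throws IOException {
    int w = curr.getWidth(), h = curr.getHeight();
    int[] a = prev.getRGB(0, 0, w, h, null, 0, w);
    int[] b = curr.getRGB(0, 0, w, h, null, 0, w);

    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    int i = 0;
    while (i < b.length) {
        if (a[i] == b[i]) {                       // run of unchanged pixels
            int start = i;
            while (i < b.length && a[i] == b[i]) i++;
            out.writeInt(-(i - start));           // negative value = "skip this many pixels"
        } else {                                  // run of changed pixels, stored literally
            int start = i;
            while (i < b.length && a[i] != b[i]) i++;
            out.writeInt(i - start);
            for (int j = start; j < i; j++) out.writeInt(b[j]);
        }
    }
    return bytes.toByteArray();
}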
Break the screen up into grid squares (or strips).
Only send a grid square if it's different from the previous one.
// server start
sendScreenMetaToClient(); // width, height, how many grid squares
...

// server loop
ImageBuffer[] prevScrnGrid;
while (isRunning) {
    ImageBuffer scrn = captureScreen();
    ImageBuffer[] scrnGrid = screenToGrid(scrn);
    for (int i = 0; i < scrnGrid.length; i++) {
        if (!isSameImage(scrnGrid[i], prevScrnGrid[i])) {
            prevScrnGrid[i] = scrnGrid[i];
            // tell the client it will get grid square (i), then send the bytes for grid square (i)
            sendGridSquareToClient(i, scrnGrid[i]);
        }
    }
}
Don't send serialized Java objects, just send the image data.
ByteArrayOutputStream imgBytes = new ByteArrayOutputStream();
ImageIO.write( bufferedImage, "jpg", imgBytes );
imgBytes.flush();
Firstly, I might suggest only capturing a small part of the screen, rather than downscaling and potentially losing information, perhaps with something like a sliding window which can be moved around by pushing the edges with a cursor. This is really just a small design suggestion though.
As for compression, I would think that a series of images would not compress separately as well as with a decent video compression scheme, especially as frames are likely to remain consistent between captures in this scenario.
One option would be to use Xuggle, which is capable of capturing the desktop via Robot in a number of video formats afaiu, but I can't tell if you can stream and decode with this.
For capturing JPEGs and converting them, you can also use this.
Streaming these videos seems to be a little more complicated, though.
Also, it seems that the abandoned Java Media Framework supports this functionality.
My knowledge in this area is not fantastic tbh, so sorry if I have wasted your time, but it looks like some more useful information on the feasibility of using Xuggle as a screensharer has been compiled here. This also appears to link to their own notes on existing approaches.
If it doesn't need to be pure Java, I reckon this would all be much easier just by interfacing with a native screen capture tool...
Maybe it would be easiest just to send video as a series of jpegs after all! You could always implement your own compression scheme if you were feeling a little crazy...
I think you described a good solution in your question. Convert the images to jpeg, but don't write them as files to disk. If you want it to be a known video format, use M-JPEG. M-JPEG is a stream of jpeg frames in a standard format. Many digital cameras, especially older ones, save videos in this format.
You can get some information about how to play an M-JPEG stream from this question's answers: Android and MJPEG
If network bandwidth is a problem, then you'll want to use an inter-frame compression system such as MPEG-2, h.264, or similar. That requires a lot more processing than M-JPEG but is far more efficient.
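As a rough illustration of the M-JPEG idea, one common framing is multipart/x-mixed-replace over HTTP, where each JPEG is written as one part; the boundary string "frame" here is arbitrary and must match the Content-Type header sent when the connection is opened:

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Writes one frame of a multipart/x-mixed-replace ("MJPEG over HTTP") stream.
// The client must first receive a Content-Type of multipart/x-mixed-replace;boundary=frame.
static void writeMjpegFrame(OutputStream socketOut, BufferedImage frame) throws IOException {
    ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
    ImageIO.write(frame, "jpg", jpeg);

    String header = "--frame\r\n"
            + "Content-Type: image/jpeg\r\n"
            + "Content-Length: " + jpeg.size() + "\r\n\r\n";
    socketOut.write(header.getBytes(StandardCharsets.US_ASCII));
    jpeg.writeTo(socketOut);
    socketOut.write("\r\n".getBytes(StandardCharsets.US_ASCII));
    socketOut.flush();
}

Most browsers can typically display a stream framed this way directly in an <img> tag.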
If you're trying to get 24 fps video then there's no reason not to use modern video codecs. Why try to reinvent the wheel?
Xuggler works fine for encoding h264 video and sounds like it would serve your needs nicely.

Fast way to compress binary data?

I have some binary data (pixel values) in an int[] (or a byte[] if you prefer) that I want to write to disk in an Android app. I only want to use a small amount of processing time, but I want as much compression as I can get. What are my options?
In many cases the array will contain lots of consecutive zeros, so something simple and fast like RLE compression would probably work well. I can't see any Android API functions for this, though. If I have to loop over the array in Java, this will be very slow, as there is no JIT on most Android devices. I could use the NDK, but I'd rather avoid this if I can.
DeflaterOutputStream takes ~25 ms to compress 1 MB in Java. It's a native method, so a JIT should not make much difference.
Do you have a requirement which says 0.2 s or 0.5 s is too slow?
Can you do it in a background thread so the user doesn't notice how long it takes?
GZIP is based on Deflater + CRC32, so it is likely to be much the same or slightly slower.
Deflater has several modes. DEFAULT_STRATEGY is fastest in Java, but a simpler strategy such as HUFFMAN_ONLY might be faster for your data.
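If you want to measure this on your own data, a small sketch that compresses a byte[] with a given level and strategy and returns the compressed size (wrap the call in System.nanoTime() to time it):

import java.util.zip.Deflater;

// Compresses the input with the given level/strategy and returns the compressed size,
// discarding the output; useful for comparing settings on real pixel data.
static int deflatedSize(byte[] input, int level, int strategy) {
    Deflater deflater = new Deflater(level);
    deflater.setStrategy(strategy);          // e.g. Deflater.DEFAULT_STRATEGY or Deflater.HUFFMAN_ONLY
    deflater.setInput(input);
    deflater.finish();
    byte[] out = new byte[8192];
    int total = 0;
    while (!deflater.finished()) {
        total += deflater.deflate(out);
    }
    deflater.end();
    return total;                            // compressed size in bytes
}

Running it with Deflater.BEST_SPEED versus Deflater.BEST_COMPRESSION, and DEFAULT_STRATEGY versus HUFFMAN_ONLY, should quickly show which trade-off suits your data.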
Android has Java's DeflaterOutputStream. Would that work?
Pass the byte array to a FileWriter (http://download.oracle.com/javase/6/docs/api/java/io/FileWriter.html) and chain a GZIPOutputStream (http://download.oracle.com/javase/1.4.2/docs/api/java/util/zip/GZIPOutputStream.html) to it. Then, when you need to read the data back in, do the reverse: a FileReader (http://download.oracle.com/javase/1.4.2/docs/api/java/io/FileReader.html) with a GZIPInputStream (http://download.oracle.com/javase/1.4.2/docs/api/java/util/zip/GZIPInputStream.html) chained to it.
Depending on the size of the file you're saving, you will see some compression; GZIP is good like that. If you're not seeing much of a trade-off, just write the data uncompressed using a buffered writer (that should be the fastest). Also, if you do GZIP it, using a buffered writer/reader could speed it up a bit.
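A minimal sketch of that round trip; since the data is binary, this version uses the byte-stream classes (FileOutputStream/FileInputStream, buffered) rather than FileWriter/FileReader, with the GZIP streams chained on top:

import java.io.*;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Write the pixel bytes out through a GZIP stream...
static void saveCompressed(byte[] data, File file) throws IOException {
    try (OutputStream out = new GZIPOutputStream(
            new BufferedOutputStream(new FileOutputStream(file)))) {
        out.write(data);
    }
}

// ...and read them back by reversing the chain.
static byte[] loadCompressed(File file) throws IOException {
    try (InputStream in = new GZIPInputStream(
            new BufferedInputStream(new FileInputStream(file)));
         ByteArrayOutputStream buffer = new ByteArrayOutputStream()) {
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        return buffer.toByteArray();
    }
}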
I've had to solve basically the same problem on another platform and my solution was to use a modified LZW compression. First, do some difference filtering (similar to PNG) on the 32bpp image. This will turn most of the image to black if there are large areas of common color. Then use a generic GIF compression algorithm treating the filtered image as if it's 8bpp. You'll get decent compression and it works very quickly. This will need to run in native code (NDK). It's really quite easy to get native code working on Android.
Random thought: if it's image data, try saving it as PNG. Standard Java has it, and I'm sure Android will too, probably optimized with native code. It has pretty good compression and it's lossless.
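A tiny sketch of that, wrapping the raw int[] pixels in a BufferedImage and letting the PNG encoder do the work (assumes the values are packed RGB):

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

// Wraps the raw pixel values in a BufferedImage and writes them losslessly as PNG.
static void savePixelsAsPng(int[] pixels, int width, int height, File file) throws IOException {
    BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    img.setRGB(0, 0, width, height, pixels, 0, width);   // assumes packed RGB values
    ImageIO.write(img, "png", file);
}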

Java: make a low-quality image

In the software 'TeamViewer', the quality of the images can be changed. It looks like the image goes from 32-bit to 16-bit (or other values, like in the display settings in Windows). The image really is smaller, because you notice that the speed of the desktop sharing gets higher. I don't want something like "scale down, send and then scale up".
Now my question: is it possible to make a low-quality image?
Thanks
You have four alternatives for lossy compression:
- reduce spatial resolution (size)
- reduce bit depth
- compress in another domain (JPEG)
- a combination of these
You will probably get the best gain with JPEG for rich pictures like photos, and with bit-depth reduction (even down to an 8-bit-or-less palette) for images with less variation in colors. Please note that bit-depth reduction is most effective when combined with lossless compression afterwards, like run-length encoding (did you know that even JPEG uses that?).
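A crude sketch of the bit-depth reduction option: mask away the low bits of each 8-bit channel before encoding (keeping 5 bits per channel gives a 15-bit result, roughly the 16-bit mode mentioned in the question), then run a lossless compressor such as PNG or RLE over the result:

import java.awt.image.BufferedImage;

// Zeroes out the low bits of each channel, keeping bitsPerChannel significant bits.
static BufferedImage reduceBitDepth(BufferedImage src, int bitsPerChannel) {
    int drop = 8 - bitsPerChannel;
    int mask = (0xFF >> drop) << drop;                // e.g. 0xF8 when keeping 5 bits
    BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(),
            BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < src.getHeight(); y++) {
        for (int x = 0; x < src.getWidth(); x++) {
            int rgb = src.getRGB(x, y);
            int r = (rgb >> 16) & mask;
            int g = (rgb >> 8) & mask;
            int b = rgb & mask;
            dst.setRGB(x, y, (r << 16) | (g << 8) | b);
        }
    }
    return dst;
}

The masked image compresses noticeably better afterwards, because identical values (and therefore long runs) become much more common.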
Yes, you can change the compression settings for many different types of Images.
Google found this: Adjust JPEG image compression quality when saving images in Java
You can use image converters for this purpose. When the user uploads a file, it's sent to the converter, which does its thing (according to the defined settings). You would, however, need access to run applications on the server, I think.
ypnos already mentioned bit-depth reduction. Reading your question, I also immediately thought of dithering, which will preserve the image better as you reduce the size of the color space. You can pretty easily find implementations of the Floyd-Steinberg algorithm around the net.
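For illustration, a compact (not particularly fast) per-channel Floyd-Steinberg sketch that dithers down to a given number of bits per channel; production code would use one of the existing implementations mentioned above:

import java.awt.image.BufferedImage;

// Per-channel Floyd-Steinberg dithering: the error from quantizing each pixel
// is spread to its right and lower neighbours.
static BufferedImage ditherFloydSteinberg(BufferedImage src, int bitsPerChannel) {
    int w = src.getWidth(), h = src.getHeight();
    int levels = (1 << bitsPerChannel) - 1;
    float[][][] chan = new float[3][h][w];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int rgb = src.getRGB(x, y);
            chan[0][y][x] = (rgb >> 16) & 0xFF;
            chan[1][y][x] = (rgb >> 8) & 0xFF;
            chan[2][y][x] = rgb & 0xFF;
        }
    }
    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int[] q = new int[3];
            for (int c = 0; c < 3; c++) {
                float old = chan[c][y][x];
                q[c] = Math.round(Math.max(0, Math.min(255, old)) / 255f * levels) * 255 / levels;
                float err = old - q[c];
                if (x + 1 < w) chan[c][y][x + 1] += err * 7 / 16;
                if (y + 1 < h) {
                    if (x > 0) chan[c][y + 1][x - 1] += err * 3 / 16;
                    chan[c][y + 1][x] += err * 5 / 16;
                    if (x + 1 < w) chan[c][y + 1][x + 1] += err * 1 / 16;
                }
            }
            dst.setRGB(x, y, (q[0] << 16) | (q[1] << 8) | q[2]);
        }
    }
    return dst;
}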
