How to join two JPEGs without loading them in memory - Java

I've read a bit about JPEG lossless editing, with which it is possible to rotate, crop and do a few other things without needing to fully decode the image into memory. Is there any way of joining two JPEGs together using this kind of technique? I was thinking of something like copying the MCUs, quantization tables and Huffman tables from one image into the other. In another user's question someone said it could be done but didn't explain how; they only mentioned it would require reprocessing the edge MCUs or something along those lines.
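For reference, below is a minimal sketch of only the very first step of that idea: walking the marker structure of a baseline JPEG to locate the DQT, DHT, SOF0 and SOS segments whose contents would need to be copied. The class name is made up for illustration, it does not handle padding bytes, restart markers or progressive files, and actually merging MCU data would still require entropy decoding and re-encoding of the edge blocks.

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class JpegSegmentWalker {

    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(args[0])))) {
            if (in.readUnsignedShort() != 0xFFD8) {
                throw new IOException("Not a JPEG (missing SOI marker)");
            }
            while (true) {
                int marker = in.readUnsignedShort();          // e.g. 0xFFDB = DQT, 0xFFC4 = DHT, 0xFFC0 = SOF0
                if ((marker & 0xFF00) != 0xFF00) {
                    throw new IOException("Invalid marker: " + Integer.toHexString(marker));
                }
                System.out.printf("Marker 0xFF%02X%n", marker & 0xFF);
                if ((marker & 0xFF) == 0xDA) {                // SOS: entropy-coded MCU data follows
                    break;
                }
                int length = in.readUnsignedShort();          // segment length includes these 2 bytes
                in.readFully(new byte[length - 2]);           // skip the segment body
            }
        }
    }
}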

Related

How to access the raw image data

I'm using metadata-extractor to write a Java application that organizes images and finds duplicates. The API is great, but there's something I cannot figure out.
Suppose I have two JPG images. Visually, these images are exactly the same (i.e. identical pixel for pixel). However, something within the metadata encapsulated in each file may differ.
If I calculate MD5 hashes on each complete file, I will get two different hashes. However, I want to calculate a hash of only the image/pixel data, which would yield the same hash for both files.
So - Is there a way to pull out the raw image/pixel data from the JPG using metadata-extractor so that I can calculate my hash on this?
Also, is Javadoc available for this API? I cannot seem to find it.
You can achieve this using the library's JpegSegmentReader class. It'll let you pull out the JPEG segments that contain image data and ignore metadata segments.
I discussed this technique in another answer and the asker indicated they had success with the approach.
This would actually make a nice sample application for the library. If you come up with something and feel like sharing, please do.
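If you don't mind decoding the pixels, here is a different route that skips metadata-extractor entirely: decode each file with ImageIO and hash the decoded pixel values, so nothing outside the pixel data can influence the hash. This is only a sketch (the PixelHash class name is mine); it holds the full decoded image in memory and is slower than hashing the compressed segments directly, but it gives the same hash for any two files that decode to the same pixels.

import java.awt.image.BufferedImage;
import java.io.File;
import java.math.BigInteger;
import java.security.MessageDigest;
import javax.imageio.ImageIO;

public class PixelHash {

    // MD5 over decoded ARGB values only, so metadata differences don't change the hash.
    static String pixelMd5(File file) throws Exception {
        BufferedImage image = ImageIO.read(file);
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int argb = image.getRGB(x, y);
                md5.update(new byte[] {
                        (byte) (argb >>> 24), (byte) (argb >>> 16),
                        (byte) (argb >>> 8), (byte) argb });
            }
        }
        return String.format("%032x", new BigInteger(1, md5.digest()));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pixelMd5(new File(args[0])));
    }
}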

Enlarge already big jpegs using less memory

I need to enlarge already-big JPEGs. They are used for printing, so they need to be really big 300 PPI files. The resulting image will be too big to be fully held in memory. What I thought of was something like breaking the original image into small strips, enlarging each one of them separately, and writing it to the output file (another JPEG), never keeping the final image fully in memory. I've read about lossless operations on JPEGs and it seems the way to go (create a file with the strip and copy the MCUs, Huffman tables and quantization tables to the final file); I've also read something about abbreviated streams in Java. What is a good way to do this?
Your best bet is to leave the JPEGs alone and have the printing software scale the output to the device.
If you really want to double the size, you could use subsampling: double the Y component in each direction and change the sampling factors for Cb and Cr while leaving the data alone.
You could also do as you say, and recompress in strips of MCUs.
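If you go the strip route, the decode side can already be done with plain ImageIO by reading one band of rows at a time via ImageReadParam.setSourceRegion, so only a strip of the source is ever in memory. The sketch below covers only that half (the StripReader class name and strip height are arbitrary); writing the enlarged strips out as described still needs a JPEG-level tool, since the standard ImageIO JPEG writer expects the whole output image at once.

import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class StripReader {

    public static void main(String[] args) throws IOException {
        ImageInputStream input = ImageIO.createImageInputStream(new File(args[0]));
        try {
            ImageReader reader = ImageIO.getImageReaders(input).next();
            reader.setInput(input, true);

            int width = reader.getWidth(0);
            int height = reader.getHeight(0);
            int stripHeight = 256;                     // arbitrary; a multiple of the MCU height

            for (int y = 0; y < height; y += stripHeight) {
                ImageReadParam param = reader.getDefaultReadParam();
                param.setSourceRegion(new Rectangle(0, y, width, Math.min(stripHeight, height - y)));
                BufferedImage strip = reader.read(0, param);   // decodes only this band of rows
                // ...scale the strip here and hand it to whatever writes the output...
                System.out.println("Decoded strip at y=" + y + " (" + strip.getHeight() + " rows)");
            }
            reader.dispose();
        } finally {
            input.close();
        }
    }
}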

Stream image format conversions and resizing in Java

So, let's say I want to recode a PNG to JPEG in Java. The image has an extreme resolution, for example 10,000 x 10,000 px. Using the "standard" Java image API writers and readers, at some point you need the entire image decoded in RAM, which takes an extreme amount of space (hundreds of MB). I have been looking at how other tools do this, and I found that ImageMagick uses disk pixel storage, but that seems way too slow for my needs. So what I need is a true streaming recoder. And by true streaming I mean reading and processing data in chunks or bins, not just taking a stream as input and decoding it whole beforehand.
Now, first the theory behind it: is it even possible, given the JPEG and PNG algorithms, to do this using streams, or let's say in bins of data, so there is no need to have the entire image decoded in memory (or other storage)? In JPEG compression, the first few stages could be done on streams, but I believe Huffman encoding needs to build the entire tree of value probabilities after quantization, and therefore needs to analyze the whole image - so the whole image would need to be decoded beforehand, or somehow on demand by regions.
And the golden question: if the above can be achieved, is there any Java library that actually works this way and saves a large amount of RAM?
If I create a 10,000 x 10,000 PNG file, full of incompressible noise, with ImageMagick like this:
convert -size 10000x10000 xc:gray +noise random image.png
I see ImageMagick uses 675M of RAM to create the resulting 572MB file.
I can convert it to a JPEG with vips like this:
vips im_copy image.png output.jpg
and vips uses no more than 100MB of RAM while converting, and takes 7 seconds on a reasonable spec iMac around 4 years old - albeit with SSD.
I have thought about this for a while, and I would really like to implement such a library. Unfortunately, it's not that easy. Different image formats store pixels in different ways. PNGs or GIFs may be interlaced. JPEGs may be progressive (multiple scans). TIFFs are often striped or tiled. BMPs are usually stored bottom-up. PSDs are channeled. Etc.
Because of this, the minimum amount of data you have to read to recode to a different format may in the worst case be the entire image (or maybe not, if the format supports random access and you can live with a lot of seeking back and forth)... Resampling (scaling) the image to a new file in the same format would probably work in most cases, though (probably not so well for progressive JPEGs, unless you can resample each scan separately).
If you can live with a disk buffer as the second-best option, I have created some classes that allow BufferedImages to be backed by NIO MappedByteBuffers (memory-mapped file buffers, kind of like virtual memory). While performance isn't really like in-memory images, it's also not entirely useless. Have a look at MappedImageFactory and MappedFileBuffer.
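As a small illustration of the resampling point: if a downscale is acceptable, plain ImageIO can already decode at reduced resolution with ImageReadParam.setSourceSubsampling, so the full-size image is never allocated. A rough sketch (class name, file names and the factor of 4 are arbitrary), assuming the underlying reader decodes scanline by scanline, which the bundled PNG and JPEG readers do:

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class SubsampledRecode {

    public static void main(String[] args) throws IOException {
        ImageInputStream input = ImageIO.createImageInputStream(new File("huge.png"));
        try {
            ImageReader reader = ImageIO.getImageReaders(input).next();
            reader.setInput(input, true);

            ImageReadParam param = reader.getDefaultReadParam();
            param.setSourceSubsampling(4, 4, 0, 0);          // keep every 4th pixel in x and y

            BufferedImage quarter = reader.read(0, param);   // allocated at 1/4 size only
            reader.dispose();

            // convert to plain RGB in case the PNG has an alpha channel,
            // which the standard JPEG writer cannot encode
            BufferedImage rgb = new BufferedImage(
                    quarter.getWidth(), quarter.getHeight(), BufferedImage.TYPE_INT_RGB);
            Graphics2D g = rgb.createGraphics();
            g.drawImage(quarter, 0, 0, null);
            g.dispose();

            ImageIO.write(rgb, "jpg", new File("huge-quarter.jpg"));
        } finally {
            input.close();
        }
    }
}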
I've written a PNG encoder/decoder that does exactly that for the PNG format (it reads and writes progressively, which only requires keeping one row in memory): PNGJ
I don't know if there is something similar for JPEG.
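For the PNG half, a row-at-a-time loop with PNGJ looks roughly like the sketch below. The class and method names (PngReader, readRow, ImageLineInt.getScanline) are taken from the PNGJ 2.x API and may differ between versions, so treat them as assumptions to check against the library's own documentation.

import java.io.File;
import ar.com.hjg.pngj.IImageLine;
import ar.com.hjg.pngj.ImageLineInt;
import ar.com.hjg.pngj.PngReader;

public class PngRowScan {

    public static void main(String[] args) {
        PngReader reader = new PngReader(new File(args[0]));
        long sum = 0;
        for (int row = 0; row < reader.imgInfo.rows; row++) {
            IImageLine line = reader.readRow();              // decodes only this row
            int[] samples = ((ImageLineInt) line).getScanline();
            for (int sample : samples) {
                sum += sample;                               // stand-in for real per-row processing
            }
        }
        reader.end();
        System.out.println("Sum of all samples: " + sum);
    }
}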

Images as game levels

I have been searching the past few days but can't seem to find anything on how to read .png files and then build levels off of that. I already know how to load images and files, but how does one go about pulling data out of them in order to build game levels. Anyone care to enlighten me? By the way I use Java.
You are thinking too high-level. The programming language doesn't know what a "game" is, or a "level". You can load an image file, that's great -- now you have a set of binary data in memory. There is no meaning attached to those bits. What you need is a model representing your level; perhaps you could simply have two images, one being the 'background' and the other an occlusion map, where black areas on the second image are impassable/blocking, while the first image is simply the level as it is displayed.
When you're writing in a "real" programming language, and not a game-building toolkit, the building of a model to represent your problem is your responsibility.
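To make the occlusion-map idea concrete, here is a small sketch (the LevelMask class, the file name, the black-means-blocked convention and the isBlocked helper are all made up for illustration) that turns a mask image into a boolean collision grid:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class LevelMask {

    private final boolean[][] blocked;

    LevelMask(File maskFile) throws IOException {
        BufferedImage mask = ImageIO.read(maskFile);
        blocked = new boolean[mask.getHeight()][mask.getWidth()];
        for (int y = 0; y < mask.getHeight(); y++) {
            for (int x = 0; x < mask.getWidth(); x++) {
                int rgb = mask.getRGB(x, y) & 0xFFFFFF;   // ignore the alpha channel
                blocked[y][x] = (rgb == 0x000000);        // black pixels are impassable
            }
        }
    }

    boolean isBlocked(int x, int y) {
        return blocked[y][x];
    }

    public static void main(String[] args) throws IOException {
        LevelMask level = new LevelMask(new File("level1_mask.png"));
        System.out.println("Pixel (10, 5) blocked? " + level.isBlocked(10, 5));
    }
}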

Any way to easily show a very large image in a Java Applet (using smaller parts)?

I need the functionality to show large images in a Java applet without increasing the default heap space. I'm getting an "Out of memory" exception when reading the large image into a BufferedImage using ImageIO.read() (which was to be expected).
To solve this, the first thing that comes to mind is to divide the large image into smaller parts.
My question is:
Does anyone have some tips on which direction to take to implement this functionality (which includes showing the correct part when scrolling the image)? Does Java have anything built-in to support this? Any examples?
You must work with parts. The simplest is to have the server provide smallish tiles you can request as needed.
If not, you must do it yourself in the applet. Load the image, split it into tiles, and have each tile re-encode itself to a byte stream, e.g. as JPEG. Hold the rendered tile through a soft reference so it will be garbage collected as late as possible. If the rendered tile has been GC'ed, decode the JPEG byte stream again.
If you have a LOT of tiles, even consider serializing them to local storage.
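A minimal sketch of that tile idea (the Tile class and its methods are made up for illustration): the compressed JPEG bytes are kept strongly, the decoded image only through a SoftReference, and the tile re-decodes itself whenever the GC has reclaimed the pixels.

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.lang.ref.SoftReference;
import javax.imageio.ImageIO;

class Tile {

    private final byte[] jpegBytes;                        // small, kept strongly
    private SoftReference<BufferedImage> decoded;          // large, may be reclaimed by the GC

    Tile(BufferedImage image) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(image, "jpg", out);                  // re-encode the tile as JPEG
                                                           // (assumes an RGB tile; JPEG has no alpha)
        this.jpegBytes = out.toByteArray();
        this.decoded = new SoftReference<>(image);
    }

    BufferedImage getImage() throws IOException {
        BufferedImage image = decoded.get();
        if (image == null) {                               // pixels were collected; decode again
            image = ImageIO.read(new ByteArrayInputStream(jpegBytes));
            decoded = new SoftReference<>(image);
        }
        return image;
    }
}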
