I want to split an image into a number of chunks and then merge the chunks again to recreate the original image.
I used this method to split the image http://kalanir.blogspot.in/2010/02/how-to-split-image-into-chunks-java.html and this one to merge them http://kalanir.blogspot.in/2010/02/how-to-merge-multiple-images-into-one.html
But after merging I found that the new image is smaller than the original.
How do I split and merge an image without losing any information?
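For reference, as long as the chunks never pass through a lossy encoder between the split and the merge, the round trip is exact. A minimal sketch using only the standard BufferedImage API (the class name and the 4x4 grid are illustrative, not the code from the linked posts):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class SplitMerge {
    // Split an image into rows*cols equally sized chunks.
    static BufferedImage[] split(BufferedImage src, int rows, int cols) {
        int cw = src.getWidth() / cols, ch = src.getHeight() / rows;
        BufferedImage[] chunks = new BufferedImage[rows * cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                chunks[r * cols + c] = src.getSubimage(c * cw, r * ch, cw, ch);
        return chunks;
    }

    // Reassemble the chunks into a single image of the original size.
    static BufferedImage merge(BufferedImage[] chunks, int rows, int cols) {
        int cw = chunks[0].getWidth(), ch = chunks[0].getHeight();
        BufferedImage out = new BufferedImage(cw * cols, ch * rows, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                g.drawImage(chunks[r * cols + c], c * cw, r * ch, null);
        g.dispose();
        return out;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 64; y++)
            for (int x = 0; x < 64; x++)
                src.setRGB(x, y, (x * 31 + y * 7) & 0xFFFFFF);
        BufferedImage merged = merge(split(src, 4, 4), 4, 4);
        // Pixel data survives because no lossy encoding happened in between.
        System.out.println(src.getRGB(10, 20) == merged.getRGB(10, 20));
    }
}
```

If the loss only appears after writing the merged image to disk as a JPEG, the problem is the encoding step, not the split/merge itself.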
Not sure if you are:
reading the files into in-memory RGB data
splitting
joining them
writing a new file
But in that case:
When reading the JPEGs you are obtaining unencoded, uncompressed, lossless data.
When you join them you are encoding again (losing data) to generate the file.
If the second encoding doesn't do what the original software that encoded the image did, it'll be different.
A starting point is using the same JPEG quality. You can usually specify a number from 1 to 100 indicating how much quality to keep, at a cost in file size. That number should be at least equal to the one used to encode the original image.
I don't know exactly how to get or set that using the libraries you've used, but it should be available.
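With the standard javax.imageio API (which the linked posts appear to use), the quality is actually a float from 0.0 to 1.0 rather than 1 to 100. A sketch of writing a JPEG with an explicit quality:

```java
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class JpegQuality {
    // Encode a BufferedImage as JPEG with an explicit quality (0.0f..1.0f).
    static byte[] encode(BufferedImage img, float quality) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality); // higher = less loss, bigger file
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(baos)) {
            writer.setOutput(ios);
            writer.write(null, new IIOImage(img, null, null), param);
        } finally {
            writer.dispose();
        }
        return baos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        BufferedImage img = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        byte[] hi = encode(img, 0.95f);
        byte[] lo = encode(img, 0.10f);
        System.out.println("0.95 -> " + hi.length + " bytes, 0.10 -> " + lo.length + " bytes");
    }
}
```

Note that even at quality 1.0 a JPEG round trip is not bit-exact, so if you need a truly lossless split/merge, keep the merged result in a lossless format like PNG.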
You can use different image-processing Java APIs like imgscalr, ImageJ, Marvin, etc.
Related
This is one of those "pretty sure we found the answer, but hoping we're wrong" questions. We are looking at a steganography problem and it's not pretty.
Situation:
We have a series of images. We want to mark them (watermark) so the watermarks survive a series of conditions. The kicker is, we are using a lossy format, JPEG, rather than a lossless one such as PNG. Our watermarks need to survive screenshotting and, furthermore, need to be invisible to the naked eye. Finally, they need to contain at least 32 bytes of data (we expect them to be repeating patterns across an image, of course). Due to the above, we need to hide the information in the pixels themselves. I am trying a Least Significant Bit change, including using large blocks per "bit" (I tried both increments of 16, as these are the JPEG compression algorithm's chunk sizes from what we understand, as well as various prime numbers) and reading the average of the resulting block. This sort of leads to requirements:
Must be .jpg
Must survive the jpg compression algorithm
Must survive screenshotting (assume screenshots are saved losslessly)
Problem:
JPEG compression, even at 100% "minimum loss", changes the pixel values. E.g. if we draw a huge band across an image, setting the Red channel to 255 in a block 64 pixels high, more than half the values are not 255 in the compressed image. This means that even using an average of the blocks yields a random LSB, rather than what we "encoded". Our current prototype can take a random image, compress the message into a bit-encoded string and convert it to an XbyX array which is then superimposed on the image using the LSB of one of the three color channels. This works and is detectable while it remains a BufferedImage, but once we convert to a JPEG the compression destroys the message.
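The per-pixel LSB step described above, while the data is still a BufferedImage, can be sketched like this (a toy version with hypothetical names, embedding into the blue channel only):

```java
import java.awt.image.BufferedImage;

public class LsbDemo {
    // Write one bit into the LSB of the blue channel at (x, y).
    static void embedBit(BufferedImage img, int x, int y, int bit) {
        int rgb = img.getRGB(x, y);
        img.setRGB(x, y, (rgb & ~1) | (bit & 1));
    }

    // Read the bit back.
    static int extractBit(BufferedImage img, int x, int y) {
        return img.getRGB(x, y) & 1;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
        embedBit(img, 3, 4, 1);
        embedBit(img, 5, 6, 0);
        // The bits survive only as long as the image stays in a lossless form;
        // a JPEG encode/decode round trip will scramble them, which is exactly
        // the failure described above.
        System.out.println(extractBit(img, 3, 4) + " " + extractBit(img, 5, 6));
    }
}
```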
Question:
Is there a way to better control a JPEG compression's pixel values? Or are we simply out of luck here and need to drop this avenue, either shifting to PNG output (unlikely) or needing to understand the JPEG compression algorithm at length and use it to somehow determine LSB pattern outcomes? Preferably Java, but we are open to looking at alternative languages if any can solve our problem (our current PoC is in Java).
I need to enlarge already big JPEGs; they are used for printing, so they need to be really big 300 PPI files. The resulting image will be too big to hold fully in memory. What I thought of was something like breaking the original image into small strips, enlarging each one of them separately, and writing it to the output file (another JPEG), never keeping the final image fully in memory. I've read about lossless operations on JPEGs and it seems the way to go (create a file with the strip and copy the MCUs, Huffman tables and quantization tables to the final file); I've also read something about abbreviated streams in Java. What is a good way to do this?
Your best bet is to leave the JPEGs alone and have the printing software scale the output to the device.
If you really want to double the size, you could use subsampling. Just double the Y component in each direction and change the sampling for Cb and Cr while leaving the data alone.
You could also do as you say, and recompress in strips of MCUs.
So, let's say I want to recode a PNG to JPEG in Java. The image has extreme resolution, let's say for example 10,000 x 10,000 px. Using the "standard" Java image API writers and readers, you need at some point to have the entire image decoded in RAM, which takes an extreme amount of RAM (hundreds of MB). I have been looking at how other tools do this, and I found that ImageMagick uses disk pixel storage, but this seems way too slow for my needs. So what I need is a true streaming recoder. And by true streaming I mean reading and processing the data in chunks or bins, not just taking a stream as input and decoding the whole thing beforehand.
Now, first the theory behind it - is it even possible, given the JPEG and PNG algorithms, to do this using streams, or let's say in bins of data, so there is no need to hold the entire image in memory (or other storage)? In JPEG compression, the first few stages could be done on streams, but I believe Huffman encoding needs to build an entire tree of value probabilities after quantization, so it needs to analyze the whole image - meaning the whole image needs to be decoded beforehand, or somehow on demand by regions.
And the golden question, if above could be achieved, is there any Java library that can actually work in this way? And save large amount of RAM?
If I create a 10,000 x 10,000 PNG file, full of incompressible noise, with ImageMagick like this:
convert -size 10000x10000 xc:gray +noise random image.png
I see ImageMagick uses 675M of RAM to create the resulting 572MB file.
I can convert it to a JPEG with vips like this:
vips im_copy image.png output.jpg
and vips uses no more than 100MB of RAM while converting, and takes 7 seconds on a reasonable spec iMac around 4 years old - albeit with SSD.
I have thought about this for a while, and I would really like to implement such a library. Unfortunately, it's not that easy. Different image formats store pixels in different ways. PNGs or GIFs may be interlaced. JPEGs may be progressive (multiple scans). TIFFs are often striped or tiled. BMPs are usually stored bottom-up. PSDs are channeled. Etc.
Because of this, the minimum amount of data you have to read to recode to a different format may in the worst case be the entire image (or maybe not, if the format supports random access and you can live with a lot of seeking back and forth)... Resampling (scaling) the image to a new file using the same format would probably work in most cases though (probably not so good for progressive JPEGs, unless you can resample each scan separately).
If you can live with disk buffer though, as the second best option, I have created some classes that allows for BufferedImages to be backed by nio MappedByteBuffers (memory-mapped file Buffers, kind of like virtual memory). While performance isn't really like in-memory images, it's also not entirely useless. Have a look at MappedImageFactory and MappedFileBuffer.
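The idea behind those classes can be sketched with plain NIO (this is not the actual MappedImageFactory API, just a toy pixel grid backed by a memory-mapped temp file, so the OS pages the data in and out instead of keeping it all on the Java heap):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedPixels {
    final MappedByteBuffer buf;
    final int width;

    // Back a width*height grid of int pixels by a memory-mapped temp file.
    MappedPixels(int width, int height) throws IOException {
        this.width = width;
        Path tmp = Files.createTempFile("pixels", ".raw");
        tmp.toFile().deleteOnExit();
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // The mapping stays valid after the channel is closed.
            buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, (long) width * height * 4);
        }
    }

    void setPixel(int x, int y, int argb) { buf.putInt((y * width + x) * 4, argb); }
    int getPixel(int x, int y)            { return buf.getInt((y * width + x) * 4); }

    public static void main(String[] args) throws IOException {
        MappedPixels p = new MappedPixels(10000, 1000); // ~40 MB on disk, paged on demand
        p.setPixel(9999, 999, 0xFF00FF00);
        System.out.println(Integer.toHexString(p.getPixel(9999, 999)));
    }
}
```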
I've written a PNG encoder/decoder that does exactly that (reads and writes progressively, which only requires keeping one row in memory): PNGJ
I don't know if there is something similar for JPEG.
First of all, sorry for my English and for the length of the message.
I'm writing a simple application in Java for visual cryptography for a school project that takes a schema File and a secret image, then creates n images using the information contained in the schema.
For each pixel in the secret image the application looks up a matrix in the schema file and writes m pixels in the n shares (one row for each share).
A schema file contains the matrices (n*m) for every color needed for encoding, and it is composed as follows:
COLLECTION COLOR 1
START MATRIX 1
RGB
GBR
BGR
END
START MATRIX 2
.....
COLLECTION COLOR 2
START MATRIX 1
XXX
XXX
XXX
END
......
//
This file can be a few lines or many thousands, so I can't keep all the matrices in the application's memory; I need to read them from the file every time.
To test the performance I created a parser that simply searches for the matrix line by line, but it is very slow.
I thought I'd save the line number of each matrix and then use RandomAccessFile to read it, but I wanted to know if there is a more efficient method for doing this.
Thanks
If you are truly dealing with massive, massive input files that exceed your ability to load the entire thing into RAM, then using a persistent key/value store like MapDB may be an easy way to do this. Parse the file once and build an efficient [Collection+Color]->Matrix map. Store that in a persistent HTree. That'll take care of all of the caching, etc., for you. Make sure to create a good hash function for the Collection+Color tuple, and it should be very performant.
If your data access pattern tends to clump together, it may be faster to store in a B+Tree index - you can play with that and see what works best.
For your schema file, use a FileChannel and call .map() on it. With a little effort, you can calculate the necessary offsets into the mapped representation of the file and use that, or even encapsulate this mapping into a custom structure.
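The offset-index idea from the question can also be done with a single scan plus RandomAccessFile.seek(), no mapping required. A sketch under the schema format shown above (class and key names are illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SchemaIndex {
    // One pass over the file: map "collection:matrix" -> byte offset of the first row.
    static Map<String, Long> buildIndex(RandomAccessFile raf) throws IOException {
        Map<String, Long> index = new HashMap<>();
        raf.seek(0);
        int collection = 0, matrix = 0;
        String line;
        while ((line = raf.readLine()) != null) {
            if (line.startsWith("COLLECTION")) { collection++; matrix = 0; }
            else if (line.startsWith("START MATRIX")) {
                matrix++;
                index.put(collection + ":" + matrix, raf.getFilePointer());
            }
        }
        return index;
    }

    // Jump straight to a matrix and read its rows until END.
    static List<String> readMatrix(RandomAccessFile raf, long offset) throws IOException {
        raf.seek(offset);
        List<String> rows = new ArrayList<>();
        String line;
        while ((line = raf.readLine()) != null && !line.equals("END"))
            rows.add(line);
        return rows;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("schema", ".txt");
        f.deleteOnExit();
        try (PrintWriter w = new PrintWriter(f)) {
            w.println("COLLECTION COLOR 1");
            w.println("START MATRIX 1");
            w.println("RGB"); w.println("GBR"); w.println("BGR");
            w.println("END");
        }
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            Map<String, Long> idx = buildIndex(raf);
            System.out.println(readMatrix(raf, idx.get("1:1"))); // [RGB, GBR, BGR]
        }
    }
}
```

Byte offsets are better than line numbers here: seeking to a byte offset is O(1), while reaching line N still requires reading everything before it.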
I have already successfully coded my steganography program for PNG files in Java. My program works very well with both PNG and BMP files. But when I tried running my program on a JPG file, the revealed data was not the same as the original data. Certainly, the headers of the file types aren't the same. And so now I wonder: are the data structures of PNG and JPG files different? I need to know exactly how to manipulate the bytes of a JPG file without affecting its header and footer.
Thanks.
First of all you need to state the exact method you are using for image steganography, e.g. hiding the secret data in the LSBs of the pixels of the image, reading the file in binary format, etc.
If working with LSBs is your procedure, then I hope the following answer satisfies your query:
PNG and BMP are lossless file formats. After manipulating the bits of the pixels in these formats, no data is lost when you create the new image. This is the reason you are able to retrieve all the hidden data.
JPG, however, uses a lossy compression technique, due to which the data hidden in the pixels is lost. I faced this problem too, and the solution lies in handling the image in the transform domain. You need to use the Discrete Cosine Transform (DCT) method for its implementation.
The transform domain involves manipulating the algorithms and image transforms, such as the discrete cosine transform (DCT) and wavelet transforms. These methods can hide information in more significant areas of the image and may also manipulate properties of the image like luminance. These kinds of techniques are more effective than image-domain bit-wise steganographic methods. Transform-domain techniques can be applied to images of any format, and the hidden data may survive conversion between lossless and lossy formats.
How does DCT work in steganography?
The image is broken down into 8x8 blocks of pixels, and the DCT is applied to each block from left to right, top to bottom. Each block's DCT coefficients are then scaled using the quantization table, and the message is embedded in the scaled DCT coefficients.
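The 8x8 transform step above can be sketched in Java (a naive textbook DCT-II as used in JPEG, not tied to any JPEG library; real codecs use fast factored versions):

```java
public class Dct8 {
    // Forward 8x8 DCT-II (naive O(n^4) version, for clarity).
    static double[][] forward(double[][] block) {
        double[][] out = new double[8][8];
        for (int u = 0; u < 8; u++) {
            for (int v = 0; v < 8; v++) {
                double sum = 0;
                for (int x = 0; x < 8; x++)
                    for (int y = 0; y < 8; y++)
                        sum += block[x][y]
                             * Math.cos((2 * x + 1) * u * Math.PI / 16)
                             * Math.cos((2 * y + 1) * v * Math.PI / 16);
                double cu = (u == 0) ? 1 / Math.sqrt(2) : 1;
                double cv = (v == 0) ? 1 / Math.sqrt(2) : 1;
                out[u][v] = 0.25 * cu * cv * sum;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] flat = new double[8][8];
        for (double[] row : flat) java.util.Arrays.fill(row, 16);
        double[][] coef = forward(flat);
        // A constant block puts all its energy in the DC coefficient (8 * 16 = 128);
        // every AC coefficient is ~0, which is what makes room for embedding.
        System.out.println(coef[0][0] + " " + coef[0][1]);
    }
}
```

Embedding then tweaks selected mid-frequency coefficients after quantization, which is why the message can survive the JPEG encode step that destroys plain pixel LSBs.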
A lot of research is still required on this method. I am working on its code and will post it ASAP.
It will be a pleasure to hear of other methods or different efficient techniques from other developers.