When I use the Robot class to take multiple screenshots of my screen and then convert each BufferedImage to a byte array, the lengths of the byte arrays vary, sometimes by a lot. Should this be happening? I would have thought the number of bytes in each picture would be the same.
For background, I am trying to speed up a simple screen-sharing program. Right now I am sending each picture as a complete byte array, which works fine (but is slow). I would like to keep every picture in a buffer, then send only the changes from the last picture to the next, along with their indexes in the byte array, cutting down on data sent over the socket. This isn't working out, as each screenshot has a differently sized byte array.
Your screenshots are different sizes because they've got different amounts of complexity.
You haven't included a lot of detail in your question, so I'm going to assume that the screenshot routine you're using returns a PNG image.
The PNG format isn't just an array of (red, green, blue) data with one entry for every pixel of your image. Instead, the format compresses the image data. This means that (for example) an image that was just a single flat colour would be much smaller than one where every pixel was a different colour.
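If you want arrays that are always the same length, skip the encoded file format and work on the raw pixel data instead. Here is a minimal sketch of that idea, assuming you are capturing with Robot; the diffing logic is only hinted at:

```java
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

public class RawCapture {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());

        // Two captures of the same screen area...
        BufferedImage a = robot.createScreenCapture(screen);
        BufferedImage b = robot.createScreenCapture(screen);

        // ...always yield pixel arrays of identical length, because
        // getRGB returns one packed ARGB int per pixel, with no
        // compression involved.
        int[] pixelsA = a.getRGB(0, 0, a.getWidth(), a.getHeight(), null, 0, a.getWidth());
        int[] pixelsB = b.getRGB(0, 0, b.getWidth(), b.getHeight(), null, 0, b.getWidth());

        // A fixed-length representation makes diffing trivial:
        // send only (index, newValue) pairs for pixels that changed.
        for (int i = 0; i < pixelsA.length; i++) {
            if (pixelsA[i] != pixelsB[i]) {
                // queue (i, pixelsB[i]) for transmission
            }
        }
    }
}
```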
I need to write tests (in Java) for an image capturing application. To be more precise:
Some image is captured from a scanner.
The application returns a JPEG of this image.
The test shall compare the scanned image with a reference image. This reference image has been captured by the identical application and has been verified visually that it contains the same content.
My first idea was comparing the images pixel by pixel, but as the application applies JPEG compression, the result would never be "100% identical". On top of that, the scanner captures the image with slight differences each time.
The goal is not to compare two "random images" for similarity like "Do both images show a car?" but rather "How similar is the captured car to the reference image of this car?".
What other possibilities do you see?
Let me say first that image processing is not my best field, nor am I an expert in any way. However, here is my suggestion: take the absolute difference between each pair of corresponding pixels, in row-major order. Then average out the differences and see whether the average is within a certain threshold: if so, the images are similar in nature. Another way would be to go once more pixel by pixel, but instead count the number of pixels that differ by at least x, where x is a number chosen to represent how close the matching needs to be. Then just see whether the differing pixels comprise 5% of all pixels, 10%, and so on. The more differing pixels, the more different the images.
I hope this helps you, and best of luck.
P.S. To judge how far apart two pixels are, it's going to be like finding the distance between two points, i.e. sqrt[(r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2].
P.P.S. This all of course assumes that you listened to Hovercraft Full of Eels, and have somehow made sure that the images match up in size and position.
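As a rough sketch of the first suggestion in Java (it assumes both images already have the same dimensions, and the threshold you compare the result against is up to you):

```java
import java.awt.image.BufferedImage;

public class ImageSimilarity {

    // Returns the mean per-pixel Euclidean distance in RGB space.
    // Precondition (not checked here): both images have identical
    // width and height; align/resize them first if they don't.
    static double averagePixelDistance(BufferedImage img1, BufferedImage img2) {
        double total = 0;
        for (int y = 0; y < img1.getHeight(); y++) {
            for (int x = 0; x < img1.getWidth(); x++) {
                int p1 = img1.getRGB(x, y);
                int p2 = img2.getRGB(x, y);
                // Unpack the channels from the packed ARGB ints.
                int dr = ((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF);
                int dg = ((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF);
                int db = (p1 & 0xFF) - (p2 & 0xFF);
                total += Math.sqrt(dr * dr + dg * dg + db * db);
            }
        }
        return total / (img1.getWidth() * img1.getHeight());
    }
}
```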
I have a question concerning JPEG image creation with ImageIO.write(imgStega, "jpeg", file):
I am doing some steganography, and I have to hide data in the least significant bit of each pixel. I do this with getRGBA()[pos], which gives me the Red, Green, Blue and Alpha components. Then I change each value by +1 or -1, depending on a % 2 test.
The problem is, every time I use ImageIO.write, it changes my whole image seemingly at random (it is compressing). So how can I save my image exactly as it is? Otherwise I don't see any way to do steganography on a real image.
Whether I use PNG or JPEG, it's the same: the file size changes. Do you know a way to save my image the way it is?
Thanks in advance!
JPEG is lossy by definition, so the data modifications that you see are expected and there is not much you can do about it in your context.
On the other hand, PNG is also compressed but in a lossless manner. The size of the png file changes because the png compression is similar to regular file compression (called LZ): very grossly explained, it detects repeated byte patterns and encodes them in fewer bytes. Changing the bytes of your image changes these patterns, and this may change the efficiency of the compression. You could as well see an increase in size. But when an application opens your modified image, it should see exactly the bytes that you have stored.
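To make the lossless behaviour concrete, here is a minimal sketch (the file names are placeholders): flip the least significant bit of one pixel and save as PNG. Reading stego.png back will reproduce the modified pixel values exactly, even though the file size may differ from the original.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class LsbDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("cover.png")); // hypothetical input

        // Force the least significant bit of the blue channel of
        // pixel (0,0) to 1, as an example of an LSB tweak.
        int argb = img.getRGB(0, 0);
        img.setRGB(0, 0, argb | 1);

        // PNG is lossless: decoding "stego.png" later returns exactly
        // these pixel values, even though the LZ-style compression may
        // produce a different file size than the original.
        ImageIO.write(img, "png", new File("stego.png"));
    }
}
```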
Is the change of size a concern because this might allow someone to detect your modifications? In that case, I don't see any other solution than using only uncompressed formats.
I want to create an Android app that takes 2 pictures (taken with the phone camera), takes the top part of pic1 and the bottom part of pic2, and combines them into the final picture.
I'm thinking about converting each image to a byte array, then taking half of the values from the array of the first image and the other half from the other image, merging them into the final array, and converting that array back to an image. Is it feasible? Is this a good solution, or is there a better practice for this?
Well, I guess I found the solution. There is a class in the Java API called BufferedImage. This class has the methods setRGB and getRGB, where you can get the int value of the RGB colour for the pixel you specify. This way you can get the pixel colour from the image you want and set it in the target image.
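A minimal sketch of that approach with getRGB/setRGB (it assumes both source images share the same dimensions):

```java
import java.awt.image.BufferedImage;

public class CombineHalves {

    // Precondition (not checked here): both images have the same
    // width and height.
    static BufferedImage combine(BufferedImage top, BufferedImage bottom) {
        int w = top.getWidth();
        int h = top.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            // Upper half comes from the first image, lower half from the second.
            BufferedImage src = (y < h / 2) ? top : bottom;
            for (int x = 0; x < w; x++) {
                out.setRGB(x, y, src.getRGB(x, y));
            }
        }
        return out;
    }
}
```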
Try using OpenCV. It will be very fast, since it will be handling the images in native code. Convert the Bitmap objects into OpenCV Mat objects and pass their addresses to the native code, where you can do these computations very easily. If any code is required, do let me know.
I have written a program that takes a 'photo' and for every pixel it chooses to insert an image from a range of other photos. The image chosen is the photo of which the average colour is closest to the original pixel from the photograph.
I have done this by first averaging the RGB values from every pixel in each 'stock' image and then converting the result to CIE LAB, so I could calculate how 'close' it is to the pixel in question in terms of human perception of the colour.
I have then compiled an image where each pixel in the original 'photo' image has been replaced with the 'closest' stock image.
It works nicely and the effect is good. However, the stock image size is 300 by 300 pixels, and even with the virtual machine flags "-Xms2048m -Xmx2048m" (which, yes, I know is ridiculous), on a 555 px by 540 px image I can only use stock images scaled down to 50 px before I get an out-of-memory error.
So basically I am trying to think of solutions. Firstly, I think the image effect itself may be improved by averaging every 4 pixels (a 2x2 square) of the original image into a single pixel and then replacing that pixel with an image; this way the small photos will be more visible in the individual print. This should also allow me to draw the stock images at a greater size. Does anyone have any experience in this sort of image manipulation? If so, what tricks have you discovered to produce a nice image?
Ultimately, I think the way to reduce the memory errors would be to repeatedly save the image to disk, appending the next line of images to the file while continually removing the old set of rendered images from memory. How can this be done? Is it similar to appending to a normal file?
Any help in this last matter would be greatly appreciated.
Thanks,
Alex
I suggest looking into the Java Advanced Imaging (JAI) API. You're probably using BufferedImage right now, which does keep everything in memory: source images as well as output images. This is known as "immediate mode" processing. When you call a method to resize the image, it happens immediately. As a result, you're still keeping the stock images in memory.
With JAI, there are two benefits you can take advantage of.
Deferred mode processing.
Tile computation.
Deferred mode means that the output images are not computed right when you call methods on the images. Instead, a call to resize an image creates a small "operator" object that can do the resizing later. This lets you construct chains, trees, or pipelines of operations. So, your work would build a tree of operations like "crop, resize, composite" for each stock image. The nice part is that the operations are just command objects so you aren't consuming all the memory while you build up your commands.
This API is pull-based. It defers computation until some output action pulls pixels from the operators. This saves time and memory by avoiding needless pixel operations.
For example, suppose you need an output image that is 2048 x 2048 pixels, scaled up from a 512x512 crop out of a source image that's 1600x512 pixels. Obviously, it doesn't make sense to scale up the entire 1600x512 source image, just to throw away 2/3 of the pixels. Instead, the scaling operator will have a "region of interest" (ROI) based on its output dimensions. The scaling operator projects the ROI onto the source image and only computes those pixels.
The commands must eventually get evaluated. This happens in a few situations, mostly relating to output of the final image. So, asking for a BufferedImage to display the output on the screen will force all the commands to evaluate. Similarly, writing the output image to disk will force evaluation.
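Here is a sketch of what such a deferred chain looks like with the classic javax.media.jai API (the file name is a placeholder; no pixels are computed until the final call):

```java
import java.awt.image.BufferedImage;
import java.awt.image.renderable.ParameterBlock;
import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;

public class DeferredDemo {
    public static void main(String[] args) {
        // Nothing is decoded yet: "fileload" just records the source.
        RenderedOp source = JAI.create("fileload", "stock.jpg"); // hypothetical file

        // Build a crop operator; still no pixels computed.
        ParameterBlock crop = new ParameterBlock();
        crop.addSource(source);
        crop.add(0f).add(0f).add(512f).add(512f); // x, y, width, height
        RenderedOp cropped = JAI.create("crop", crop);

        // Chain a 4x scale on top of the crop; still just command objects.
        ParameterBlock scale = new ParameterBlock();
        scale.addSource(cropped);
        scale.add(4f).add(4f); // x and y scale factors
        RenderedOp scaled = JAI.create("scale", scale);

        // Only this call pulls pixels through the whole chain.
        BufferedImage result = scaled.getAsBufferedImage();
    }
}
```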
In some cases, you can keep the second benefit of JAI, which is tile based rendering. Whereas BufferedImage does all its work right away, across all pixels, tile rendering just operates on rectangular sections of the image at a time.
Using the example from before, the 2048x2048 output image will get broken into tiles. Suppose these are 256x256; then the entire image gets broken into 64 tiles. The JAI operator objects know how to work one tile at a time. So, scaling the 512x512 section of the source image really happens 64 times, on 64x64 source pixels at a time.
Computing a tile at a time means looping across the tiles, which would seem to take more time. However, two things work in your favor when doing tile computation. First, tiles can be evaluated on multiple threads concurrently. Second, the transient memory usage is much, much lower than immediate mode computation.
All of which is a long-winded explanation for why you want to use JAI for this type of image processing.
A couple of notes and caveats:
You can defeat tile based rendering without realizing it. Anywhere you've got a BufferedImage in the workstream, it cannot act as a tile source or sink.
If you render to disk using the JAI or JAI Image I/O operators for JPEG, then you're in good shape. If you try to use the JDK's built-in image classes, you'll need all the memory. (Basically, avoid mixing the two types of image manipulation. Immediate mode and deferred mode don't mix well.)
All the fancy stuff with ROIs, tiles, and deferred mode is transparent to the program. You just make API calls on the JAI class. You only deal with the machinery if you need more control over things like tile sizes, caching, and concurrency.
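If you do want that control, the knobs are on the JAI class itself; for instance (the sizes here are purely illustrative):

```java
import java.awt.Dimension;
import javax.media.jai.JAI;

public class JaiTuning {
    public static void main(String[] args) {
        // Ask JAI to use 256x256 tiles for newly created operators.
        JAI.setDefaultTileSize(new Dimension(256, 256));

        // Cap the tile cache at 64 MB so intermediate tiles
        // don't accumulate without bound.
        JAI.getDefaultInstance().getTileCache().setMemoryCapacity(64L * 1024 * 1024);
    }
}
```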
Here's a suggestion that might be useful:
Try segregating the two main tasks into individual programs. Your first task is to decide which images go where, and that can be a simple mapping from coordinates to filenames, which can be represented as lines of text:
0,0,image123.jpg
0,1,image542.jpg
.....
After that task is done (and it sounds like you have it well handled), then you can have a separate program handle the compilation.
This compilation could be done by appending to an image file, but you probably don't want to mess around with file formats yourself. It's better to let your programming environment do it by using a Java Image object of some sort. The biggest one you can fit in memory is about 2G pixels (the Java array limit), giving a maximum height and width of roughly sqrt(2 x 10^9), about 45,000 pixels. Divide that by the number of images you have across and down to get the pixels allowed per subimage, then paint each one into the appropriate place.
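A minimal sketch of the painting step with BufferedImage and Graphics2D (the grid size, tile size, and file names are made up for illustration, and the mapping-file parsing is left out):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class MosaicPainter {
    public static void main(String[] args) throws Exception {
        int tile = 50;              // pixels per subimage (assumed)
        int cols = 10, rows = 10;   // grid size; read from the mapping in practice

        BufferedImage out = new BufferedImage(cols * tile, rows * tile,
                BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();

        // For each mapping line "row,col,filename": load the stock image,
        // draw it scaled into its cell, then let it be garbage collected,
        // so only one stock image is in memory at a time.
        BufferedImage stock = ImageIO.read(new File("image123.jpg"));
        g.drawImage(stock, 0 * tile, 0 * tile, tile, tile, null);

        g.dispose();
        ImageIO.write(out, "png", new File("mosaic.png"));
    }
}
```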
Every time you 'append', are you perhaps implicitly creating a new object with one more pixel to replace the old one (i.e. a parallel to the classic problem of repeatedly appending to a String instead of using a StringBuilder)?
If you post the portion of your code that does the storing and appending, someone will probably help you find an efficient way of recoding it.
I'm trying to display a big image file in a J2ME application. But I found that when the image file is too big, I cannot even create the Image instance, and I get an OutOfMemoryError.
I suppose I can read the image file in small chunks and create a thumbnail to show to the user?
Is there a way to do this? Or is there any other method to display the image file in the application?
Thanks.
There are a number of things to try, depending on exactly what you are trying to do and what handset you want your application to run on.
If your image is packaged inside your MIDlet JAR file, you have less control over what the MIDP runtime does, because the data needs to be unzipped before it can be loaded as an Image. In that case, I would suggest simply packaging a smaller image: either reduce the number of pixels or the number of bytes used to encode each pixel.
If you can read the image bytes from a GCF-based InputStream (file, network...), you need to understand the image format (BMP is straightforward, JPEG less so...) so you can scale it down, one chunk at a time, into a mutable Image object that takes less memory.
In that case, you also need to decide what your scaling algorithm should be. Turning 32-bit pixels in a file into 8-bit pixels in memory might not actually work as expected if the LCDUI implementation on your mobile phone was badly written.
Depending on the content of the image, simply removing half of the pixel columns and half of the pixel lines may be either exactly what you need or way too naive an approach. You may want to look at existing image scaling algorithms and write one into your application.
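As a sketch of that naive approach, here is a plain-Java method that halves an already decoded chunk of ARGB pixels by keeping only the even rows and columns (on MIDP 2.0 the result could then be wrapped with Image.createRGBImage):

```java
public class NaiveScaler {

    // Naive 2x downscale: drop every other column and every other line.
    // 'pixels' is one decoded chunk of ARGB values, width x height in size.
    static int[] halve(int[] pixels, int width, int height) {
        int outW = width / 2;
        int outH = height / 2;
        int[] out = new int[outW * outH];
        for (int y = 0; y < outH; y++) {
            for (int x = 0; x < outW; x++) {
                // Keep only the pixel at the even source coordinates.
                out[y * outW + x] = pixels[(y * 2) * width + (x * 2)];
            }
        }
        return out;
    }
}
```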
Remember that basic LCDUI may not be the only way to display an image on the screen. JSR-184, JSR-239, JSR-226 and eSWT could all allow you to do that in a way that may be totally independent of your handset's LCDUI implementation.
Finally, let's face it: if your phone's MIDP runtime doesn't allow you to create at least 2 images the size of your screen, at full colour depth, at the same time, then it might be time to decide not to support that specific handset.