I'm trying to display a big image file in a J2ME application, but I found that when the image file is too big I cannot even create the Image instance and get an OutOfMemory exception.
I suppose I can read the image file in small chunks and create a thumbnail to show to the user?
Is there a way to do this? Or is there any other method to display the image file in the application?
Thanks.
There are a number of things to try, depending on exactly what you are trying to do and what handset you want your application to run on.
If your image is packaged inside your MIDlet JAR file, you have less control over what the MIDP runtime does, because the data needs to be unzipped before it can be loaded as an Image. In that case, I would suggest simply packaging a smaller image: either reduce the number of pixels or the number of bytes used to encode each pixel.
If you can read the image bytes from a GCF-based InputStream (file, network...), you need to understand the image format (BMP is straightforward, JPEG less so...) so you can scale it down into a mutable Image object that takes less memory, one chunk at a time.
In that case, you also need to decide what your scaling algorithm should be. Turning 32-bit pixels in a file into 8-bit pixels in memory might not actually work as expected if the LCDUI implementation on your mobile phone was badly written.
Depending on the content of the image, simply removing half of the pixel columns and half of the pixel rows may be either exactly what you need or far too naive an approach. You may want to look at existing image scaling algorithms and implement one in your application.
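For illustration, here is a minimal nearest-neighbour downscale sketch (one possible approach, not the only one) that builds a smaller MIDP 2.0 Image from an ARGB pixel buffer; it assumes you have already decoded the source pixels, in chunks or in one go, into srcPixels:

import javax.microedition.lcdui.Image;

// Nearest-neighbour downscale: pick one source pixel per destination pixel.
// srcPixels holds 0xAARRGGBB values for the decoded source image.
public static Image scaleDown(int[] srcPixels, int srcW, int srcH,
                              int dstW, int dstH) {
    int[] dst = new int[dstW * dstH];
    for (int y = 0; y < dstH; y++) {
        int srcY = y * srcH / dstH;          // nearest source row
        for (int x = 0; x < dstW; x++) {
            int srcX = x * srcW / dstW;      // nearest source column
            dst[y * dstW + x] = srcPixels[srcY * srcW + srcX];
        }
    }
    // MIDP 2.0: build an Image from the scaled pixel buffer (alpha ignored)
    return Image.createRGBImage(dst, dstW, dstH, false);
}

Averaging each source block instead of picking a single pixel would give a smoother thumbnail, at the cost of a little more arithmetic per pixel.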
Remember that basic LCDUI may not be the only way to display an image on the screen. JSR-184, JSR-239, JSR-226 and eSWT could all allow you to do that in a way that may be totally independent of your handset's LCDUI implementation.
Finally, let's face it: if your phone's MIDP runtime doesn't allow you to create at least 2 images the size of your screen at full color depth at the same time, then it might be time to decide not to support that specific handset.
I have a question concerning JPEG image creation with ImageIO.write(imgStega, "jpeg", file):
I am doing some steganography and have to hide data in the least significant bit of each pixel. I do this with getRGBA()[pos], which gives me the red, green, blue and alpha components. Then I change each value by +1 or -1 depending on a % 2.
The problem is that every time I use ImageIO.write, it changes my whole image seemingly at random (it is compressing). So how can I save my image exactly as it is? Otherwise I don't see any way to do steganography on a real image.
Whether I use PNG or JPEG makes no difference; the file size changes either way. Do you know a way to save my image the way it is?
Thanks in advance!
JPEG is lossy by definition, so the data modifications that you see are expected and there is not much you can do about it in your context.
On the other hand, PNG is also compressed, but in a lossless manner. The size of the PNG file changes because PNG compression is similar to regular file compression (it is LZ-based): very roughly, it detects repeated byte patterns and encodes them in fewer bytes. Changing the bytes of your image changes these patterns, and this may change the efficiency of the compression; you could just as well see an increase in size. But when an application opens your modified image, it should see exactly the pixel values that you stored.
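If you want to convince yourself of that, here is a rough round-trip sketch (file names are placeholders, and it assumes the cover decodes to a plain RGB BufferedImage rather than an indexed one):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class LsbRoundTrip {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("cover.png"));
        int rgb = img.getRGB(0, 0);
        int stego = (rgb & ~1) | 1;                    // force the blue LSB to 1
        img.setRGB(0, 0, stego);
        ImageIO.write(img, "png", new File("stego.png"));

        BufferedImage back = ImageIO.read(new File("stego.png"));
        // prints true: the PNG file size may differ, but the pixels do not
        System.out.println(back.getRGB(0, 0) == stego);
    }
}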
Is the change of size a concern because this might allow someone to detect your modifications? In that case, I don't see any other solution than using only uncompressed formats.
I have a widget with a ViewFlipper that flips between X images. I aim for 10 images to be flipped, and I can do this if I load very small images. My widget is 4x2 in size and I want to display images with good quality, but I can't achieve this. Everything loads fine, no exceptions, but the widget never displays them. If I load very small images (100x100 px), it starts flipping them. If I load a larger image size (300x300), it won't start flipping the images until I reduce the number of images (flips) to 4.
This suggests a memory limitation to me, but I would expect an exception to be thrown somewhere after I do appWidgetManager.updateWidget(widgetId, remoteViewFlipper).
Going through the logs, I don't see anything even remotely related to this.
I think it depends on the concrete implementation of the launcher, since that is where the widget content is hosted and processed. Your updates are sent as Parcelables, so there could be a data size limit as well:
http://groups.google.com/group/android-developers/browse_thread/thread/26ce74534024f41a?pli=1
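If that is indeed the limit you are hitting, one workaround is to shrink each bitmap before it gets parcelled into the RemoteViews. A hedged sketch, where the layout and view ids (R.layout.widget_flipper, R.layout.widget_image, R.id.flipper, R.id.image) and the photos, context and widgetId variables are all placeholders:

// Scale each photo down before it is written into the update Parcel.
RemoteViews flipper = new RemoteViews(context.getPackageName(), R.layout.widget_flipper);
for (Bitmap photo : photos) {
    Bitmap small = Bitmap.createScaledBitmap(photo, 200, 200, true);  // keep the parcel small
    RemoteViews page = new RemoteViews(context.getPackageName(), R.layout.widget_image);
    page.setImageViewBitmap(R.id.image, small);
    flipper.addView(R.id.flipper, page);                              // one flipper page per photo
}
appWidgetManager.updateAppWidget(widgetId, flipper);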
I'm developing software that compares images, and I need it to be fast! Currently I compare them in plain C, but it's too slow.
I want to compare them using shaders and a couple of GL surfaces (textures), from C rather than Java (although that doesn't change the situation much), and get back a list of the changed parts, but I really don't know where to start.
Basically I wanted to use something like SIMD NEON instructions to compare pixel colours and check for changes (I only need to check the first colour component, e.g. only red; these are photos, so it's unrealistic that a changed pixel would leave it untouched), but instead of NEON instructions I want to use pixel shaders to do the comparison and get the list of changed parts back.
Also, if possible, I want to run the comparison in parallel on the same image by splitting it into blocks :)
Can someone give me a hint?
Note: I know that I can't output a list of results directly, but using a third texture as output is fine for me (if I put two ushorts on the texture that indicate x and y I'm OK, with a uint at the end of the texture that reports the number of changed pixels).
OpenGL ES 1.1 doesn't have shaders, and the best route I can think of for what you want to do ends with a 50% reduction in colour precision. Issues are:
Without extensions there's additive blending, but not subtractive. No problem: just upload the second of your textures with all colour values inverted.
OpenGL clamps output colours to the range [0, 1] and, without extensions, you're limited to one byte per channel. So you'd need to upload textures with 7-bit colour channels to ensure you get correct results within the 8 bits coming back.
Shaders would allow a slightly circuitous route around that, because you can add or subtract or do whatever you want, and can split up the results. If you're sending two three-channel 24-bit images in to get a four-channel 32-bit image out, there's obviously enough space to fit 9 bits per source channel, even though you're going to have to divide the data oddly and reconstruct it later.
In practice you're going to pay quite a lot for uploading and downloading images from the GPU, so NEON might be a better choice not just to avoid packing peculiarities. Assuming the Android kit supplies the same compiler intrinsics as the iPhone kit (likely, since they'll both include GCC), this page has a bit of an introduction showing how to convert an image to greyscale. So it's not exactly what you're looking for, but it's image processing in C using NEON so it should be a good start.
In both cases you're likely to end up with an image of the differences, rather than a simple count and list. A count is an accumulation across every pixel, whichever way you think about it, so it isn't really something you'd do in GL or via NEON; you'd need to inspect the final image to work it out.
I have written a program that takes a 'photo' and, for every pixel, inserts an image chosen from a range of other photos. The image chosen is the photo whose average colour is closest to the original pixel from the photograph.
I have done this by first averaging the RGB values of every pixel in each 'stock' image and then converting the result to CIE LAB, so I could calculate how 'close' it is to the pixel in question in terms of human perception of colour.
I have then compiled an image where each pixel in the original 'photo' image has been replaced with the 'closest' stock image.
It works nicely and the effect is good; however, the stock images are 300 by 300 pixels, and even with the virtual machine flags "-Xms2048m -Xmx2048m" (which, yes, I know is ridiculous), on a 555px by 540px image I can only place the stock images scaled down to 50px before I get an out-of-memory error.
So basically I am trying to think of solutions. Firstly, I think the image effect itself may be improved by averaging every 4 pixels (a 2x2 square) of the original image into a single pixel and then replacing that pixel with an image, as this way the small photos will be more visible in the individual print. This should also allow me to draw the stock images at a greater size. Does anyone have any experience with this sort of image manipulation? If so, what tricks have you discovered to produce a nice image?
Ultimately I think the way to reduce the memory errors would be to repeatedly save the image to disk, appending the next line of images to the file whilst continually removing the old set of rendered images from memory. How can this be done? Is it similar to appending to a normal file?
Any help in this last matter would be greatly appreciated.
Thanks,
Alex
I suggest looking into the Java Advanced Imaging (JAI) API. You're probably using BufferedImage right now, which does keep everything in memory: source images as well as output images. This is known as "immediate mode" processing. When you call a method to resize the image, it happens immediately. As a result, you're still keeping the stock images in memory.
With JAI, there are two benefits you can take advantage of.
Deferred mode processing.
Tile computation.
Deferred mode means that the output images are not computed right when you call methods on the images. Instead, a call to resize an image creates a small "operator" object that can do the resizing later. This lets you construct chains, trees, or pipelines of operations. So, your work would build a tree of operations like "crop, resize, composite" for each stock image. The nice part is that the operations are just command objects so you aren't consuming all the memory while you build up your commands.
This API is pull-based. It defers computation until some output action pulls pixels from the operators. This quickly helps save time and memory by avoiding needless pixel operations.
For example, suppose you need an output image that is 2048 x 2048 pixels, scaled up from a 512x512 crop out of a source image that's 1600x512 pixels. Obviously, it doesn't make sense to scale up the entire 1600x512 source image, just to throw away 2/3 of the pixels. Instead, the scaling operator will have a "region of interest" (ROI) based on its output dimensions. The scaling operator projects the ROI onto the source image and only computes those pixels.
The commands must eventually get evaluated. This happens in a few situations, mostly relating to output of the final image. So, asking for a BufferedImage to display the output on the screen will force all the commands to evaluate. Similarly, writing the output image to disk will force evaluation.
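As a rough sketch of such a chain, using the standard JAI operation names and placeholder file names (it mirrors the 512x512 crop scaled up to 2048x2048 from the example above):

import javax.media.jai.Interpolation;
import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;
import java.awt.image.renderable.ParameterBlock;

// Each create() call only builds an operator node; no pixels are computed yet.
RenderedOp source = JAI.create("fileload", "source.jpg");

ParameterBlock cropPb = new ParameterBlock();
cropPb.addSource(source);
cropPb.add(0f).add(0f).add(512f).add(512f);            // x, y, width, height
RenderedOp cropped = JAI.create("crop", cropPb);

ParameterBlock scalePb = new ParameterBlock();
scalePb.addSource(cropped);
scalePb.add(4f).add(4f).add(0f).add(0f);               // xScale, yScale, xTrans, yTrans
scalePb.add(Interpolation.getInstance(Interpolation.INTERP_BILINEAR));
RenderedOp scaled = JAI.create("scale", scalePb);

// Writing the result is what finally pulls (and therefore computes) the pixels.
JAI.create("filestore", scaled, "output.tif", "TIFF");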
In some cases, you can keep the second benefit of JAI, which is tile based rendering. Whereas BufferedImage does all its work right away, across all pixels, tile rendering just operates on rectangular sections of the image at a time.
Using the example from before, the 2048x2048 output image will get broken into tiles. Suppose these are 256x256; then the entire image gets broken into 64 tiles. The JAI operator objects know how to work a tile at a time. So, scaling the 512x512 section of the source image really happens 64 times, on 64x64 source pixels at a time.
Computing a tile at a time means looping across the tiles, which would seem to take more time. However, two things work in your favor when doing tile computation. First, tiles can be evaluated on multiple threads concurrently. Second, the transient memory usage is much, much lower than immediate mode computation.
All of which is a long-winded explanation for why you want to use JAI for this type of image processing.
A couple of notes and caveats:
You can defeat tile based rendering without realizing it. Anywhere you've got a BufferedImage in the workstream, it cannot act as a tile source or sink.
If you render to disk using the JAI or JAI Image I/O operators for JPEG, then you're in good shape. If you try to use the JDK's built-in image classes, you'll need all the memory. (Basically, avoid mixing the two types of image manipulation. Immediate mode and deferred mode don't mix well.)
All the fancy stuff with ROIs, tiles, and deferred mode is transparent to the program. You just make API calls on the JAI class. You only deal with the machinery if you need more control over things like tile sizes, caching, and concurrency.
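If you do want that extra control, here is a small sketch (the numbers are purely illustrative, and scalePb is the ParameterBlock from the sketch above):

import java.awt.RenderingHints;
import javax.media.jai.ImageLayout;
import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;

// Force a 256x256 tile size on one operation via an ImageLayout rendering hint.
ImageLayout layout = new ImageLayout();
layout.setTileWidth(256);
layout.setTileHeight(256);
RenderingHints hints = new RenderingHints(JAI.KEY_IMAGE_LAYOUT, layout);
RenderedOp scaled = JAI.create("scale", scalePb, hints);

// Cap the memory JAI may spend on cached tiles (64 MB here).
JAI.getDefaultInstance().getTileCache().setMemoryCapacity(64L * 1024 * 1024);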
Here's a suggestion that might be useful:
Try segregating the two main tasks into individual programs. Your first task is to decide which images go where, and that can be a simple mapping from coordinates to filenames, which can be represented as lines of text:
0,0,image123.jpg
0,1,image542.jpg
.....
After that task is done (and it sounds like you have it well handled), then you can have a separate program handle the compilation.
This compilation could be done by appending to an image file, but you probably don't want to mess around with file formats yourself. It's better to let your programming environment do it by using a Java Image object of some sort. The biggest one you can fit in memory is limited to roughly 2 GB of pixel data, which gives a maximum height and width on the order of sqrt(2x10^9), about 45,000 pixels, at one byte per pixel (or roughly half that for 4-byte ARGB pixels). Dividing that by the number of images you have along the height and width gives you the pixels allowed per subimage, and you can then paint each one into the appropriate place.
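As a sketch of that second program, assuming the mapping lines are "row,column,filename" and using a placeholder tile size:

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import javax.imageio.ImageIO;

public class MosaicComposer {
    static final int TILE = 50;   // pixels per subimage, derived as described above

    static BufferedImage compose(File mapping, int rows, int cols) throws IOException {
        BufferedImage out = new BufferedImage(cols * TILE, rows * TILE,
                                              BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        BufferedReader in = new BufferedReader(new FileReader(mapping));
        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.split(",");
            int r = Integer.parseInt(parts[0]);
            int c = Integer.parseInt(parts[1]);
            BufferedImage stock = ImageIO.read(new File(parts[2]));
            // Draw the stock image scaled into its cell; it can then be garbage collected.
            g.drawImage(stock, c * TILE, r * TILE, TILE, TILE, null);
        }
        in.close();
        g.dispose();
        return out;
    }
}

Only one stock image needs to be held in memory at a time this way, on top of the single output image.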
Every time you 'append', are you perhaps implicitly creating a new object with one more pixel to replace the old one (i.e., a parallel to the classic problem of repeatedly appending to a String instead of using a StringBuilder)?
If you post the portion of your code that does the storing and appending, someone will probably help you find an efficient way of recoding it.
I have a very large image and I only want to display a section the size of the display (no scaling), and the section should just be the center of the image. Because the image is very large I cannot read the entire image into memory and then crop it. This is what I have so far but it will give OutOfMemory for large images. Also I don't think inSampleSize applies because I want to crop the image, not lower the resolution.
Uri data = getIntent().getData();
InputStream is = getContentResolver().openInputStream(data);
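// decoding with default options loads the full-size bitmap, which is what runs out of memory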
Bitmap bitmap = BitmapFactory.decodeStream(is, null, null);
Any help would be great.
You can do this in 2 steps:
Get the size of the bitmap by using inJustDecodeBounds=true.
Use BitmapRegionDecoder to decode just the part you want.
The downside? It works only from API 10 (but that's already the majority of devices...).
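A minimal sketch of those two steps, written as a method inside the Activity from the question (error handling kept to a minimum; the desired output size is passed in rather than read from the display):

// Step 1: read only the dimensions; step 2: decode just the centred region.
private Bitmap decodeCenter(Uri data, int viewW, int viewH) throws IOException {
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inJustDecodeBounds = true;                       // no pixels are allocated here
    InputStream is = getContentResolver().openInputStream(data);
    BitmapFactory.decodeStream(is, null, opts);
    is.close();

    int left = Math.max(0, (opts.outWidth - viewW) / 2);
    int top = Math.max(0, (opts.outHeight - viewH) / 2);
    Rect region = new Rect(left, top,
            left + Math.min(viewW, opts.outWidth),
            top + Math.min(viewH, opts.outHeight));

    is = getContentResolver().openInputStream(data);      // reopen; the stream was consumed
    BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(is, false);
    Bitmap center = decoder.decodeRegion(region, null);   // API 10+
    decoder.recycle();
    is.close();
    return center;
}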
I agree that the easiest way is to break the image up into many smaller tiled images and to just load the appropriate ones to make the image you are after.
However, if you do not want to do that, you may be forced to look into the encoding of the jpeg itself.
What you could do is something along the lines of copying the header from the file into a new file, and then extracting just the pixels you want in order to create a new file. Reloading the new file will then give you just the subset of the image you want to work with, and all the regular Java functionality and classes will be equally available for you to use.
I know it isn't necessarily an elegant or simple solution, however it does guarantee that you will be able to use the original java functionality which you expect to be able to use.
I think you're approaching the problem from the wrong direction.
If the bitmap is already so large that it can't be loaded as a single continuous image, why store it as a single image? Slice it into tiles, then load the center tile(s) and act on those.