Is there a memory limit for widgets? - java

I have a widget with ViewFlipper that flips between X number of images. I aim for 10 images to be flipped, and I can do this if I load very small images. My widget is size 4x2 and I want to display images with good quality, but I can't achieve this. Everything loads fine, no exceptions, but the widget never displays them. If I load very small image sizes (100x100 px), it starts flipping them. If I load larger image size (300x300), it won't start flipping the images until I reduce the number of images (flips) to 4.
This suggests a memory limitation to me, but I would expect an exception to be thrown somewhere after I call appWidgetManager.updateAppWidget(widgetId, remoteViewFlipper).
Going through the logs, I don't see anything nearly related to this.

I think it depends on the concrete implementation of the launcher - the widget is hosted and processed there. Your updates are sent as Parcelables, so there should be a data size limit as well:
http://groups.google.com/group/android-developers/browse_thread/thread/26ce74534024f41a?pli=1
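The Binder transaction buffer that carries those Parcelables is about 1 MB, shared across the process. A back-of-envelope sketch (the class and method names are made up for illustration; the exact limit and per-pixel cost vary by device and bitmap config, so treat this as a heuristic, not an exact rule):

```java
// Rough estimate of the bitmap payload an update would push through Binder.
// ARGB_8888 costs 4 bytes per pixel when parceled uncompressed; the shared
// transaction buffer is roughly 1 MB, so stay well under it.
public class WidgetPayloadEstimate {
    static final long BINDER_BUDGET_BYTES = 1024 * 1024; // ~1 MB, shared

    static long bitmapBytes(int width, int height) {
        return (long) width * height * 4; // assumes 4 bytes/pixel
    }

    static boolean fitsInBudget(int width, int height, int imageCount) {
        return bitmapBytes(width, height) * imageCount < BINDER_BUDGET_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(fitsInBudget(100, 100, 10)); // 10 * 40,000 B = 400 KB -> true
        System.out.println(fitsInBudget(300, 300, 10)); // 10 * 360,000 B = 3.6 MB -> false
    }
}
```

This matches the symptom in the question: ten 100x100 images fit comfortably, while ten 300x300 images blow well past the budget without throwing an exception in your own process.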


What width should my images be to be neat on my app?

I have a RecyclerView on my Android app's main page that displays a list of pictures, and I can't decide what width to give my images so that they are not too heavy but still look sharp. (My images take the whole width of the screen.)
I know every Android device has a different format, but I need a one-size-fits-all.
I currently have images in my res folder with a width of 500 px (and approx. 350 px height). The images weigh between 30 KB and 100 KB. But I must admit they are not very sharp ...
So I guess I have to pack pictures with a greater width to gain quality, but I have no idea by how much.
How does Instagram do it? Pictures there are always very sharp. What are the characteristics of their pictures? I guess they weigh a ton, no? (like 500 KB per image?) Or am I wrong?
There are utilities ('image optimizers') that shrink JPEG images without any significant quality loss; you can find some of them on Google for manual use.
When showing a lot of images, you can use a library such as Compressor to do this for you. I believe apps such as Instagram have their own code, but the idea is the same. The file sizes are usually 15-300 KB, depending on the 'complexity' of the image.
After spending quite some time testing different sizes and weights, my conclusion is that 500 px wide images are generally not displayed very well when you want an image to take the whole width of the phone, although the files are light (between 30 KB and 80 KB).
Going below 500 px width is not a good idea.
When you go up to 1000 px wide images, you get much better quality, which in my opinion gives your app a better look and feel. But obviously the images are somewhat bigger (between 80 and 190 KB).
When you go up to 1500 px wide images, you have to be careful about the weight of the image. I wouldn't recommend images that weigh more than 200 KB. But at least you get very good quality ... which is something nice :D
I hope that helps.
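One way to act on those tiers is to pick, at runtime, the smallest bundled width that still covers the device width in pixels, so a full-width image is never upscaled. A minimal sketch (the class and the set of bundled widths are assumptions mirroring the 500/1000/1500 px tiers above):

```java
// Picks the smallest bundled asset width that still covers the device
// width, so a full-width image never has to be stretched up.
public class AssetWidthPicker {
    static final int[] BUNDLED_WIDTHS = {500, 1000, 1500}; // sorted ascending

    static int pickWidth(int deviceWidthPx) {
        for (int w : BUNDLED_WIDTHS) {
            if (w >= deviceWidthPx) return w;
        }
        // Device is wider than the largest asset: use the largest available.
        return BUNDLED_WIDTHS[BUNDLED_WIDTHS.length - 1];
    }

    public static void main(String[] args) {
        System.out.println(pickWidth(480));  // -> 500
        System.out.println(pickWidth(1080)); // -> 1500
        System.out.println(pickWidth(2000)); // -> 1500 (best we have)
    }
}
```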

Progressive Image Loading on Android

I have a ViewPager that I'm using to page through several ImageViews (around 1000) which each load a full-screen Bitmap. I'm subsampling the images using options.inSampleSize and using a memcache for performance but this is still not fast enough. What I'm trying to avoid is the user seeing blank ImageViews when they page through the views quickly.
I'm wondering if there is a way to serve a partially rendered image (or low quality) while the requested image loads.
I've noticed the Android gallery incorporates something like this (especially noticeable when zooming in and out, loading looks similar to loading of a progressive JPEG on the web), but I haven't been able to find any related information.
Android does not currently provide a way to get intermediate images during decoding (e.g. a progressive JPEG). You only get one final image.
You are already using inSampleSize to get coarser versions of the image. This hopefully makes the decode quicker, although there are no timing guarantees: it uses libjpeg under the hood. Have you measured how long each image decode takes? Have you tried setting inPreferQualityOverSpeed to false? Are you reusing the same bitmap (which may also speed up the decode)? You can also check whether inSampleSize makes a difference in speed.
In any case, any work spent getting low-quality images quickly is lost if you then want a more detailed version of the same image, as you have to start the decode from the beginning. Therefore, you want to limit decoding to the visible image plus its near neighbors, so the user can page without seeing blank images.
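The sample-size computation itself is pure arithmetic, so it can be shown outside the framework. This is the standard power-of-two pattern used with BitmapFactory.Options.inSampleSize (the helper class name is made up): a first pass can decode with a large sample size for a quick coarse image, and a second pass decodes the full-quality version for the visible page only.

```java
// Power-of-two sample-size calculation for BitmapFactory.Options.inSampleSize.
// Returns the largest power of two such that the decoded image still covers
// the requested dimensions.
public class SampleSize {
    static int calculateInSampleSize(int srcW, int srcH, int reqW, int reqH) {
        int inSampleSize = 1;
        if (srcH > reqH || srcW > reqW) {
            int halfW = srcW / 2;
            int halfH = srcH / 2;
            // Keep doubling while both halved dimensions still cover the target.
            while ((halfH / inSampleSize) >= reqH && (halfW / inSampleSize) >= reqW) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        System.out.println(calculateInSampleSize(4096, 3072, 1024, 768)); // -> 4
        System.out.println(calculateInSampleSize(800, 600, 1024, 768));   // -> 1
    }
}
```

Decoding at `inSampleSize * 2` (or more) first gives a cheap placeholder to show immediately, which is essentially the progressive effect the question describes.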

Animated background using Libgdx

I'm creating a UI system for an android game that will have a large (up to 4096x4096) background area in which menus can be placed anywhere within that screen and a camera will fly to that location when a different menu is needed. Instead of having a large static image, I'd like to be able to animate this slightly. What I'd like to know is how to do this efficiently without lagging up the device. These are the methods I've come up with so far, but maybe there is something better..
1) Have 3 separate 4096x4096 static layers for the background, 1 is the sky, one is the terrain, one is things like clouds and trees. Each layer is placed on top of each other with a slight difference in Z space to give a little parallax effect when the camera moves.
2) Have a large stationary background image, with a layer on top of that with individual specific sprites of clouds, trees and other things that should be animated. I think this might be the most efficient route, as I can choose not to animate parts that are not in view, but it will also limit re-usability as every different object will have to be placed manually in space. My goal is to be able to simply change the assets and be able to have a whole new game.
3) Have 1 large background layer with several frames that plays almost like a video. I feel like this will be the worst on performance(loading several 4096x4096 frames and drawing a different one 30 times a second), but would give me the scene exactly how I want it directly out of After Effects. I doubt this one is even feasible, not just because of the drawing but storage space on android devices just for the menu UI wouldn't allow for several 6MB frames.
Are any of these in the right direction? I have seen a few similar questions asked but none fit close enough to what I needed(A large, moving background that isn't made of tiles).
Any help is appreciated.
Since your question is tagged for Android, I would recommend the 2nd solution.
The main reason is that solutions #1 and #3 involve loading numerous 4096x4096 textures.
Quick calculation: three 32-bit textures at that resolution would use around 200 MB of video RAM (192 MB for the base textures alone, more with mipmaps). That means you can immediately rule out a lot of Android devices.
On the other hand, solution #2 involves only two big textures: the large stationary background image, and a texture atlas containing the specific sprites of clouds, trees...
This solution is much more memory friendly and will lead to the same aesthetic output.
TL;DR: all 3 solutions would work, but only #2 fits an embedded device.
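The arithmetic behind that figure is just width × height × bytes-per-pixel (the class name is illustrative):

```java
// Back-of-envelope VRAM cost of uncompressed textures:
// a 32-bit texture costs width * height * 4 bytes (before mipmaps).
public class TextureMemory {
    static long textureBytes(int width, int height, int bytesPerPixel) {
        return (long) width * height * bytesPerPixel;
    }

    public static void main(String[] args) {
        long one = textureBytes(4096, 4096, 4);
        System.out.println(one / (1024 * 1024));     // -> 64  (MiB per layer)
        System.out.println(3 * one / (1024 * 1024)); // -> 192 (MiB for 3 layers)
    }
}
```

Mipmapping adds roughly another third on top, which is how three layers approach the 200+ MB mark.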

Appending to an Image File

I have written a program that takes a 'photo' and for every pixel it chooses to insert an image from a range of other photos. The image chosen is the photo of which the average colour is closest to the original pixel from the photograph.
I have done this by first averaging the RGB values of every pixel in each 'stock' image and then converting to CIE LAB, so I could calculate how 'close' it is to the pixel in question in terms of human perception of the colour.
I have then compiled an image where each pixel in the original 'photo' image has been replaced with the 'closest' stock image.
It works nicely and the effect is good. However, the stock image size is 300 by 300 pixels, and even with the virtual machine flags "-Xms2048m -Xmx2048m", which yes, I know is ridiculous, on a 555px by 540px image I can only draw the stock images scaled down to 50 px before I get an out-of-memory error.
So basically I am trying to think of solutions. Firstly, I think the image effect itself may be improved by averaging every 4 pixels (a 2x2 square) of the original image into a single pixel and then replacing this pixel with the image, as this way the small photos will be more visible in the final print. This should also allow me to draw the stock images at a greater size. Does anyone have any experience with this sort of image manipulation? If so, what tricks have you discovered to produce a nice image?
Ultimately, I think the way to reduce the memory errors would be to repeatedly save the image to disk, appending the next line of images to the file while continually removing the already-rendered images from memory. How can this be done? Is it similar to appending to a normal file?
Any help in this last matter would be greatly appreciated.
Thanks,
Alex
I suggest looking into the Java Advanced Imaging (JAI) API. You're probably using BufferedImage right now, which does keep everything in memory: source images as well as output images. This is known as "immediate mode" processing. When you call a method to resize the image, it happens immediately. As a result, you're still keeping the stock images in memory.
With JAI, there are two benefits you can take advantage of.
Deferred mode processing.
Tile computation.
Deferred mode means that the output images are not computed right when you call methods on the images. Instead, a call to resize an image creates a small "operator" object that can do the resizing later. This lets you construct chains, trees, or pipelines of operations. So, your work would build a tree of operations like "crop, resize, composite" for each stock image. The nice part is that the operations are just command objects so you aren't consuming all the memory while you build up your commands.
This API is pull-based. It defers computation until some output action pulls pixels from the operators. This helps save time and memory by avoiding needless pixel operations.
For example, suppose you need an output image that is 2048 x 2048 pixels, scaled up from a 512x512 crop out of a source image that's 1600x512 pixels. Obviously, it doesn't make sense to scale up the entire 1600x512 source image, just to throw away 2/3 of the pixels. Instead, the scaling operator will have a "region of interest" (ROI) based on its output dimensions. The scaling operator projects the ROI onto the source image and only computes those pixels.
The commands must eventually get evaluated. This happens in a few situations, mostly relating to output of the final image. So, asking for a BufferedImage to display the output on the screen will force all the commands to evaluate. Similarly, writing the output image to disk will force evaluation.
In some cases, you can keep the second benefit of JAI, which is tile based rendering. Whereas BufferedImage does all its work right away, across all pixels, tile rendering just operates on rectangular sections of the image at a time.
Using the example from before, the 2048x2048 output image will get broken into tiles. Suppose these are 256x256; then the entire image gets broken into 64 tiles. The JAI operator objects know how to work a tile at a time. So, scaling the 512x512 section of the source image really happens 64 times, on 64x64 source pixels at a time.
Computing a tile at a time means looping across the tiles, which would seem to take more time. However, two things work in your favor when doing tile computation. First, tiles can be evaluated on multiple threads concurrently. Second, the transient memory usage is much, much lower than immediate mode computation.
All of which is a long-winded explanation for why you want to use JAI for this type of image processing.
A couple of notes and caveats:
You can defeat tile based rendering without realizing it. Anywhere you've got a BufferedImage in the workstream, it cannot act as a tile source or sink.
If you render to disk using the JAI or JAI Image I/O operators for JPEG, then you're in good shape. If you try to use the JDK's built-in image classes, you'll need all the memory. (Basically, avoid mixing the two types of image manipulation. Immediate mode and deferred mode don't mix well.)
All the fancy stuff with ROIs, tiles, and deferred mode is transparent to the program. You just make API calls on the JAI class. You only deal with the machinery if you need more control over things like tile sizes, caching, and concurrency.
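To make the deferred ("pull") model concrete without pulling in the JAI library itself, here is a toy sketch of the same idea in plain Java: each operation returns a cheap command object, and no pixel work happens until something at the end of the chain pulls a result. This is an illustration of the concept, not the JAI API.

```java
import java.util.function.Supplier;

// Toy deferred-evaluation chain: building the pipeline allocates only small
// command objects; pixels are touched only when the result is pulled.
public class DeferredDemo {
    static int evaluations = 0; // counts how many ops have actually run

    static Supplier<int[]> crop(Supplier<int[]> src, int from, int to) {
        return () -> {
            evaluations++;
            int[] in = src.get();
            int[] out = new int[to - from];
            System.arraycopy(in, from, out, 0, to - from);
            return out;
        };
    }

    static Supplier<int[]> scale2x(Supplier<int[]> src) {
        return () -> {
            evaluations++;
            int[] in = src.get();
            int[] out = new int[in.length * 2];
            for (int i = 0; i < in.length; i++) {
                out[2 * i] = in[i];     // nearest-neighbor duplicate
                out[2 * i + 1] = in[i];
            }
            return out;
        };
    }

    public static void main(String[] args) {
        Supplier<int[]> source = () -> new int[] {1, 2, 3, 4, 5, 6};
        // Building the chain does no pixel work at all...
        Supplier<int[]> chain = scale2x(crop(source, 2, 4));
        System.out.println(evaluations); // -> 0
        // ...until the result is pulled.
        int[] result = chain.get();
        System.out.println(result.length); // -> 4 (pixels 3,3,4,4)
        System.out.println(evaluations);   // -> 2
    }
}
```

In real JAI the command objects are `RenderedOp` nodes created with `JAI.create(...)`, and the pull happens when you ask for a `BufferedImage` or write the image to disk.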
Here's a suggestion that might be useful:
Try segregating the two main tasks into individual programs. Your first task is to decide which images go where, and that can be a simple mapping from coordinates to filenames, which can be represented as lines of text:
0,0,image123.jpg
0,1,image542.jpg
.....
After that task is done (and it sounds like you have it well handled), then you can have a separate program handle the compilation.
This compilation could be done by appending to an image, but you probably don't want to mess around with file formats yourself. It's better to let your programming environment do it by using a Java Image object of some sort. The biggest one you can fit in memory is about 2G pixels (array indices are ints), giving a maximum height and width of sqrt(2x10^9) ≈ 46,000 px. Dividing that by the number of images you have across and down gives the pixels allowed per sub-image, and you can paint them into the appropriate places.
Every time you 'append', are you perhaps implicitly creating a new object with one more pixel to replace the old one (i.e., a parallel to the classic problem of repeatedly appending to a String instead of using a StringBuilder)?
If you post the portion of your code that does the storing and appending, someone will probably help you find an efficient way of recoding it.
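The size budget from the earlier answer can be sketched directly (class name is illustrative; the cap assumes one int-indexed array of packed ARGB pixels):

```java
// A single in-memory image backed by one int[] of packed ARGB pixels is
// capped at Integer.MAX_VALUE (~2^31) pixels, giving a maximum square side
// of about 46,340 px. Dividing by the mosaic grid size gives the largest
// sub-image you can afford.
public class MosaicBudget {
    static final long MAX_PIXELS = Integer.MAX_VALUE;

    static int maxSide() {
        return (int) Math.sqrt((double) MAX_PIXELS);
    }

    static int maxSubImageSide(int tilesPerRow) {
        return maxSide() / tilesPerRow;
    }

    public static void main(String[] args) {
        System.out.println(maxSide());            // -> 46340
        System.out.println(maxSubImageSide(555)); // -> 83 (555 tiles across)
    }
}
```

So for the 555-pixel-wide source photo in the question, roughly 83 px per stock image is the ceiling for a single in-memory output, which is why going beyond ~50 px already strains a 2 GB heap once the intermediate images are counted.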

How to make a thumb from a big image file in J2ME?

I'm trying to display a big image file in a J2ME application. But I found that when the image file is too big I can not even create the Image instance and get a OutOfMemory exception.
I suppose I can read the image file in small chunks and create a thumbnail to show to the user?
Is there a way to do this? Or is there any other method to display the image file in the application?
Thanks.
There are a number of things to try, depending on exactly what you are trying to do and what handset you want your application to run on.
If your image is packaged inside your MIDlet JAR file, you have less control over what the MIDP runtime does, because the data needs to be unzipped before it can be loaded as an Image. In that case, I would suggest simply packaging a smaller image: either reduce the number of pixels or the number of bytes used to encode each pixel.
If you can read the image bytes from a GCF-based InputStream (file, network...), you need to understand the image format (BMP is straightforward, JPEG less so...) so you can scale it down into a mutable Image object that takes less memory, one chunk at a time.
In that case, you also need to decide what your scaling algorithm should be. Turning 32-bit pixels in a file into 8-bit pixels in memory might not actually work as expected if the LCDUI implementation on your mobile phone was badly written.
Depending on the content of the image, simply removing half of the pixel columns and half of the pixel lines may be either exactly what you need or way too naive an approach. You may want to look at existing image scaling algorithms and write one into your application.
Remember that basic LCDUI may not be the only way to display an image on the screen. JSR-184, JSR-239, JSR-226 and eSWT could all allow you to do that in a way that may be totally independent of your handset's LCDUI implementation.
Finally, let's face it, if your phone MIDP runtime doesn't allow you to create at least 2 images the size of your screen at full color depth at the same time, then it might be time to decide to not support that specific handset.
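The "drop columns and rows" scaler mentioned above is just nearest-neighbor sampling on a packed pixel array. A minimal sketch (class name is made up; on J2ME you would get the int[] from Image.getRGB and rebuild with Image.createRGBImage):

```java
// Naive nearest-neighbor downscale on a packed int[] of pixels.
// Works on plain arrays, so the same logic runs on CLDC/J2ME or desktop Java.
public class NearestNeighbor {
    static int[] scale(int[] src, int srcW, int srcH, int dstW, int dstH) {
        int[] dst = new int[dstW * dstH];
        for (int y = 0; y < dstH; y++) {
            int srcY = y * srcH / dstH; // nearest source row
            for (int x = 0; x < dstW; x++) {
                int srcX = x * srcW / dstW; // nearest source column
                dst[y * dstW + x] = src[srcY * srcW + srcX];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        // 4x2 source scaled to 2x1: keeps the pixels at columns 0 and 2, row 0.
        int[] src = {1, 2, 3, 4,
                     5, 6, 7, 8};
        int[] thumb = scale(src, 4, 2, 2, 1);
        System.out.println(thumb[0] + "," + thumb[1]); // -> 1,3
    }
}
```

For photographic content you would eventually want box filtering (averaging each source block) instead of dropping pixels, but nearest-neighbor is the cheapest way to get a thumbnail on a constrained handset.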
