I want to scale a large 1920x1080 BufferedImage down to a smaller 200x200 size using a progressive bicubic approach.
I start with the 1920x1080 image and scale it down to roughly 80% of the original, then I want to store this temp image somewhere, in some format, so that in the next iteration I can again scale that image to 80%, continuing the procedure until I obtain a 200x200 image, which I finally display on my JFrame.
What is the method or way to store this temp image? Or can anyone suggest a simple approach to implement this progressive bicubic scaling?
The expected code looks similar to this (though it needs various modifications, I just need the way to store the temp image):
int sizew = 1920, sizeh = 1080;
int deltaw = (int) (0.20 * 1920);
int deltah = (int) (0.20 * 1080);
BufferedImage temp;
while (sizew > 200 && sizeh > 200) {
    sizew = sizew - deltaw;
    sizeh = sizeh - deltah;
    if (sizew < 200 || sizeh < 200) {
        sizew = 200;
        sizeh = 200;
        temp = new BufferedImage(sizew, sizeh, BufferedImage.TYPE_INT_RGB);
        // but using this, how would I give a reference to my original 1920x1080 image or the temp image???
        break;
    } else {
        temp = new BufferedImage(sizew, sizeh, BufferedImage.TYPE_INT_RGB);
    }
}
No easy task; here's an outline of the brute-force approach:
Tile the image into manageable pieces using an available approach suited to the source, for example
Java's BufferedImage.getSubimage(), seen here.
Ossim, designed for geodetic data, but usable for imagery.
Resample the tiles as warranted by the intended use, for example
AffineTransform, seen here, using TYPE_NEAREST_NEIGHBOR (a rough sketch follows this outline).
ImageJ, which may be scripted.
Reassemble the tiles; the approach depends on the destination.
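By way of illustration, here is a minimal sketch of the tile-and-resample step, assuming java.awt's AffineTransformOp; the tile size, interpolation type, and method name are illustrative choices, not a definitive implementation:
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

// Sketch: cut one tile out of a large source image and resample it.
// The caller would loop over tiles, scale each, and draw the results
// into a destination image at the corresponding scaled offsets.
static BufferedImage scaleTile(BufferedImage src, int tileX, int tileY,
        int tileW, int tileH, double scale) {
    BufferedImage tile = src.getSubimage(tileX, tileY, tileW, tileH);
    AffineTransformOp op = new AffineTransformOp(
            AffineTransform.getScaleInstance(scale, scale),
            AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
    return op.filter(tile, null);
}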
Related
I am looking for the simplest (and still non-problematic) way to resize a BufferedImage in Java.
In some answer to a question, the user coobird suggested the following solution, in his words (very slightly changed by me):
The Graphics object has a method that draws an Image while also performing a resize operation:
Graphics.drawImage(Image, int, int, int, int, ImageObserver)
This method can be used to specify the location along with the size of the image when drawing.
So, we could use a piece of code like this:
BufferedImage originalImage = // .. created somehow
BufferedImage newImage = new BufferedImage(SMALL_SIZE, SMALL_SIZE, BufferedImage.TYPE_INT_RGB);
Graphics g = newImage.createGraphics();
g.drawImage(originalImage, 0, 0, SMALL_SIZE, SMALL_SIZE, null);
g.dispose();
This will take originalImage and draw it on the newImage with the width and height of SMALL_SIZE.
This solution seems rather simple. I have two questions about it:
Will it also work (using the exact same code), if I want to resize an image to a larger size, not only a smaller one?
Are there any problems with this solution?
If there is a better way to do this, please suggest it.
Thanks
The major problem with single-step scaling is that it doesn't generally produce quality output: it focuses on taking the original and squeezing it into a smaller space, usually by dropping out a lot of pixel information (different algorithms do different things, so I'm generalizing).
Will drawImage scale up and down? Yes. Will it do it efficiently or produce quality output? That comes down to the implementation; generally speaking, most of the default scaling algorithms are focused on speed. You can affect them in a small way, but unless you're scaling over a small range, the quality generally suffers (in my experience).
You can take a look at The Perils of Image.getScaledInstance() for more details and discussions on the topic.
What is generally recommended is to either use a dedicated library, like imgscalr, which, from the ten minutes I've played with it, does a pretty good job, or to perform a stepped scale.
A stepped scale basically steps the image up or down by a power of 2 until it reaches its desired size (a minimal sketch appears after the example links below). Remember, scaling up is nothing more than taking a pixel and enlarging it a little, so quality will always be an issue if you scale up to a very large size.
For example...
Quality of Image after resize very low -- Java
Scale the ImageIcon automatically to label size
Java: JPanel background not scaling
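As promised, a minimal sketch of a stepped down-scale, assuming bicubic interpolation via Graphics2D rendering hints; the helper names are illustrative, not from any particular library:
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Sketch: halve the image repeatedly until the next halving would undershoot
// the target, then do one final resize to the exact target size.
static BufferedImage stepDown(BufferedImage src, int targetW, int targetH) {
    BufferedImage current = src;
    int w = current.getWidth();
    int h = current.getHeight();
    while (w / 2 >= targetW && h / 2 >= targetH) {
        w /= 2;
        h /= 2;
        current = resize(current, w, h);
    }
    return resize(current, targetW, targetH);
}

static BufferedImage resize(BufferedImage src, int w, int h) {
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    Graphics2D g = out.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
            RenderingHints.VALUE_INTERPOLATION_BICUBIC);
    g.drawImage(src, 0, 0, w, h, null);
    g.dispose();
    return out;
}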
Remember, any scaling is generally an expensive operation (depending on the original and target sizes of the image), so it is generally best to do those operations outside of the paint process and in the background where possible.
There is also the question of whether you want to maintain the aspect ratio of the image. Based on your example, the image would be scaled in a square manner (stretched to meet the requirements of the target size), which is generally not desired. You can pass -1 to either the width or height parameter and the underlying algorithm will maintain the aspect ratio of the original image, or you can simply take control and decide whether you want to fill or fit the image to a target area (a small sketch of that computation follows the link), for example...
Java: maintaining aspect ratio of JPanel background image
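The fit computation itself is only a few lines. A sketch, assuming you just want the target dimensions (names are illustrative):
import java.awt.Dimension;

// Sketch: compute a size that fits inside (targetW, targetH) while keeping
// the source aspect ratio; swapping Math.min for Math.max fills the area instead.
static Dimension fit(int srcW, int srcH, int targetW, int targetH) {
    double scale = Math.min((double) targetW / srcW, (double) targetH / srcH);
    return new Dimension((int) Math.round(srcW * scale),
            (int) Math.round(srcH * scale));
}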
In general, I avoid using drawImage or getScaledInstance most of the time (if you're only scaling over a small range, or want a low-quality, fast scale, they can work) and rely more on things like fitting/filling a target area and stepped scaling. The reason for using my own methods simply comes down to not always being allowed to use outside libraries. It's nice not to have to re-invent the wheel where you can.
It will enlarge the original if you set the parameters accordingly. But you should use a smart algorithm that preserves edges, because simply enlarging an image will make it blurry and result in worse perceived quality.
No problems. Theoretically this can even be hardware-accelerated on certain platforms.
I want to try to upscale an image up to 3 times.
For example,
Up Scaled Image
I am using this library for Image Resizing.
The following code snippet does the trick:
public static BufferedImage getScaledSampledImage(BufferedImage img,
        int targetWidth, int targetHeight, boolean higherQuality) {
    ResampleOp resampleOp = new ResampleOp(targetWidth, targetHeight);
    resampleOp.setUnsharpenMask(AdvancedResizeOp.UnsharpenMask.Normal);
    BufferedImage rescaledImage = resampleOp.filter(img, null);
    return rescaledImage;
}
You can see there that the resized images are of lower quality. I want to be able to upscale images to at least 3 times the original size without quality loss.
Is it possible? Do I need to change the existing library?
Thanks
The fact is that when you scale an image up, it has to create data where there was none. (Roughly speaking). To do this, you have to interpolate between the existing pixels to fill in the new ones. You may be able to get better results by trying different kinds of interpolation - but the best method will be different for different kinds of images.
Since your sample image is just squares, nearest-neighbour interpolation will likely give you the best results. If you are scaling up an image of scenery or maybe a portrait you'll need a more clever algorithm.
The result will rarely look perfect, and it's best to start with a larger image if possible!
Take a look here to get an understanding of the problem. http://en.wikipedia.org/wiki/Image_scaling#Scaling_methods
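To illustrate the nearest-neighbour suggestion above for the blocky sample image, here is a minimal sketch using the standard AffineTransformOp (not the resizing library from the question):
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

// Sketch: 3x nearest-neighbour upscale; keeps hard edges crisp for images
// made of flat squares, but looks blocky for photos (use bicubic there).
static BufferedImage upscale3x(BufferedImage src) {
    AffineTransformOp op = new AffineTransformOp(
            AffineTransform.getScaleInstance(3, 3),
            AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
    return op.filter(src, null);
}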
I do not know Java (I usually write in C). How can I efficiently do some kind of blitting of pixel array content onto a window in Java?
I need to blit pixels[][] onto a window in a loop. I could use something like
pixels[][] -> MemoryImageSource -> Image -> drawImage
but creating and deleting a MemoryImageSource and an Image in every frame seems strange to me. How can this be done simply and reasonably efficiently? Could someone give a code example? Thanks.
Normally in Java it's easier to work with the native Image types and use their derived graphics. Behind the scenes Java uses blits as well, so the higher-level abstractions are there to ease the workload.
But if there's no way to abstract over the pixel data, you can use Raster and WritableRaster (where you can replace portions of the array) as an alternative to your solution. These rasters can be used with a BufferedImage, which can then be drawn using the drawImage method you mentioned. I found one way of doing it here, which basically creates the Image and then retrieves the raster for future manipulation.
int x = 100, y = 100;
BufferedImage image = new BufferedImage(x, y, BufferedImage.TYPE_INT_RGB);
WritableRaster raster = image.getRaster();
That raster (or just small areas of it) can then be manipulated and repainted.
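For example, a minimal sketch of pushing a packed-RGB pixel array into the image's backing buffer (the array layout and names are assumptions; for TYPE_INT_RGB the data buffer is an int[] in row-major order):
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

// Sketch: copy pixels[row][col] (packed 0xRRGGBB) into the image's backing
// int[] buffer; afterwards, draw the image with g.drawImage(image, 0, 0, null)
// inside paintComponent() and call repaint() once per frame.
static void blit(int[][] pixels, BufferedImage image) {
    int w = image.getWidth();
    int h = image.getHeight();
    int[] data = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    for (int row = 0; row < h; row++) {
        for (int col = 0; col < w; col++) {
            data[row * w + col] = pixels[row][col];
        }
    }
}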
This might improve performance slightly, since the distance from your pixel array to the screen is shorter. But I think very few people fully understand the entire depths of the AWT API, and it all depends on the native implementations of course, so my idea contains a healthy part of speculation ;-)
But I hope it helps.
For speed, you can pre-compute variations of the ColorModel, as shown in this example.
I have written a program that takes a 'photo' and, for every pixel, inserts an image from a range of other photos. The image chosen is the photo whose average colour is closest to the original pixel from the photograph.
I have done this by first averaging the RGB values of every pixel in each 'stock' image and then converting that to CIE LAB, so I could calculate how 'close' it is to the pixel in question in terms of human perception of colour.
I have then compiled an image where each pixel in the original 'photo' image has been replaced with the 'closest' stock image.
It works nicely and the effect is good. However, the stock image size is 300 by 300 pixels, and even with the virtual machine flags "-Xms2048m -Xmx2048m" (which, yes, I know is ridiculous), on a 555px by 540px image I can only draw the stock images scaled down to 50px before I get an out-of-memory error.
So basically I am trying to think of solutions. Firstly, I think the image effect itself may be improved by averaging every 4 pixels (a 2x2 square) of the original image into a single pixel and then replacing that pixel with a stock image, as this way the small photos will be more visible in the final print. This should also allow me to draw the stock images at a greater size. Does anyone have any experience with this sort of image manipulation? If so, what tricks have you discovered to produce a nice image?
Ultimately, I think the way to reduce the memory errors would be to repeatedly save the image to disk and append the next line of images to the file, while continually removing the old set of rendered images from memory. How can this be done? Is it similar to appending to a normal file?
Any help in this last matter would be greatly appreciated.
Thanks,
Alex
I suggest looking into the Java Advanced Imaging (JAI) API. You're probably using BufferedImage right now, which does keep everything in memory: source images as well as output images. This is known as "immediate mode" processing. When you call a method to resize the image, it happens immediately. As a result, you're still keeping the stock images in memory.
With JAI, there are two benefits you can take advantage of.
Deferred mode processing.
Tile computation.
Deferred mode means that the output images are not computed right when you call methods on the images. Instead, a call to resize an image creates a small "operator" object that can do the resizing later. This lets you construct chains, trees, or pipelines of operations. So, your work would build a tree of operations like "crop, resize, composite" for each stock image. The nice part is that the operations are just command objects so you aren't consuming all the memory while you build up your commands.
This API is pull-based. It defers computation until some output action pulls pixels from the operators. This quickly helps save time and memory by avoiding needless pixel operations.
For example, suppose you need an output image that is 2048x2048 pixels, scaled up from a 512x512 crop out of a source image that's 1600x512 pixels. Obviously, it doesn't make sense to scale up the entire 1600x512 source image just to throw away 2/3 of the pixels. Instead, the scaling operator will have a "region of interest" (ROI) based on its output dimensions. The scaling operator projects the ROI onto the source image and only computes those pixels.
The commands must eventually get evaluated. This happens in a few situations, mostly relating to output of the final image. So, asking for a BufferedImage to display the output on the screen will force all the commands to evaluate. Similarly, writing the output image to disk will force evaluation.
In some cases, you can keep the second benefit of JAI, which is tile based rendering. Whereas BufferedImage does all its work right away, across all pixels, tile rendering just operates on rectangular sections of the image at a time.
Using the example from before, the 2048x2048 output image will get broken into tiles. Suppose these are 256x256; then the entire image gets broken into 64 tiles. The JAI operator objects know how to work a tile at a time. So, scaling the 512x512 section of the source image really happens 64 times, on 64x64 source pixels at a time.
Computing a tile at a time means looping across the tiles, which would seem to take more time. However, two things work in your favor when doing tile computation. First, tiles can be evaluated on multiple threads concurrently. Second, the transient memory usage is much, much lower than immediate mode computation.
All of which is a long-winded explanation for why you want to use JAI for this type of image processing.
A couple of notes and caveats:
You can defeat tile based rendering without realizing it. Anywhere you've got a BufferedImage in the workstream, it cannot act as a tile source or sink.
If you render to disk using the JAI or JAI Image I/O operators for JPEG, then you're in good shape. If you try to use the JDK's built-in image classes, you'll need all the memory. (Basically, avoid mixing the two types of image manipulation. Immediate mode and deferred mode don't mix well.)
All the fancy stuff with ROIs, tiles, and deferred mode is transparent to the program. You just make API calls on the JAI class. You only deal with the machinery if you need more control over things like tile sizes, caching, and concurrency.
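For a concrete flavour of the deferred style, here is a minimal sketch, assuming the classic javax.media.jai API and its standard "fileload", "scale", and "filestore" operators (the file names and tile size are illustrative):
import java.awt.RenderingHints;
import java.awt.image.renderable.ParameterBlock;
import javax.media.jai.ImageLayout;
import javax.media.jai.Interpolation;
import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;

// Sketch: build a deferred load -> scale chain; no pixels are computed until
// the "filestore" sink pulls them, tile by tile.
ImageLayout layout = new ImageLayout();
layout.setTileWidth(256);
layout.setTileHeight(256);
RenderingHints hints = new RenderingHints(JAI.KEY_IMAGE_LAYOUT, layout);

RenderedOp source = JAI.create("fileload", "stock.jpg");

ParameterBlock pb = new ParameterBlock();
pb.addSource(source);
pb.add(0.5f);  // x scale factor
pb.add(0.5f);  // y scale factor
pb.add(0.0f);  // x translation
pb.add(0.0f);  // y translation
pb.add(Interpolation.getInstance(Interpolation.INTERP_BILINEAR));
RenderedOp scaled = JAI.create("scale", pb, hints);

JAI.create("filestore", scaled, "out.jpg", "JPEG");  // evaluation happens here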
Here's a suggestion that might be useful:
Try segregating the two main tasks into individual programs. Your first task is to decide which images go where, and that can be a simple mapping from coordinates to filenames, which can be represented as lines of text:
0,0,image123.jpg
0,1,image542.jpg
.....
After that task is done (and it sounds like you have it well handled), then you can have a separate program handle the compilation.
This compilation could be done by appending to an image, but you probably don't want to mess around with file formats yourself. It's better to let your programming environment do it by using a Java Image object of some sort. The biggest one you can fit in memory, pixel-wise, will be limited to about 2^31 pixels (a Java array limit), giving roughly sqrt(2x10^9) as the maximum height and width for a square image. Dividing that by the number of images you have along the height and width gives the overall pixels allowed per subimage, and you can paint them into the appropriate places (a rough sketch of this follows).
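A rough sketch of that painting step, assuming the coordinate-to-filename mapping from above has already been loaded into a 2D array (the names, cell size, and output format are illustrative; error handling is omitted):
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Sketch: paint each stock image, scaled to cellSize x cellSize, into its
// grid cell of a single output image, then write the result to disk.
static void compose(String[][] fileForCell, int cellSize, File out) throws Exception {
    int rows = fileForCell.length;
    int cols = fileForCell[0].length;
    BufferedImage mosaic = new BufferedImage(cols * cellSize, rows * cellSize,
            BufferedImage.TYPE_INT_RGB);
    Graphics2D g = mosaic.createGraphics();
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            BufferedImage stock = ImageIO.read(new File(fileForCell[r][c]));
            g.drawImage(stock, c * cellSize, r * cellSize, cellSize, cellSize, null);
        }
    }
    g.dispose();
    ImageIO.write(mosaic, "png", out);
}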
Every time you 'append', are you perhaps implicitly creating a new object with one more pixel to replace the old one (i.e., a parallel to the classic problem of repeatedly appending to a String instead of using a StringBuilder)?
If you post the portion of your code that does the storing and appending, someone will probably help you find an efficient way of recoding it.
I have an Eclipse RCP application that displays a lot (10k+) of small images next to each other, like a film strip. For each image, I am using an SWT Image object. This uses an excessive amount of memory and resources. I am looking for a more efficient way. I thought of taking all of these images and concatenating them by creating an ImageData object of the proper total, concatenated width (with a constant height) and using setPixel() for the rest of the pixels. However, I can't figure out the PaletteData to use in the ImageData constructor.
I also searched for SWT tiling or mosaic functionality to create one image from a group of images, but found nothing.
Any ideas how I can display thousands of small images next to each other efficiently? Please note that once the images are displayed, they are not manipulated, so this is a one-time cost.
You can draw directly on the GC (graphics context) of a new (big) image. Having one big Image should result in much less resource usage than thousands of smaller images (each image in SWT holds an OS graphics object handle).
What you can try is something like this:
final List<Image> images;
final Image bigImage = new Image(Display.getCurrent(), combinedWidth, height);
final GC gc = new GC(bigImage);
//loop thru all the images while increasing x as necessary:
int x = 0;
int y = 0;
for (Image curImage : images) {
gc.drawImage(curImage, x, y);
x += curImage.getBounds().width;
}
//very important to dispose GC!!!
gc.dispose();
//now you can use bigImage
Presumably not every image is visible on screen at any one time? Perhaps a better solution would be to only load the images when they become (or are about to become) visible, disposing of them when they have been scrolled off the screen. Obviously you'd want to keep a few in memory on either side of the current viewport in order to make a smooth transition for the user.
I previously worked on a Java application to create photomosaics, and found it very difficult to achieve adequate performance and memory usage using the Java imaging (JAI) libraries and SWT. Although we weren't using nearly as many images as you mention, one route was to rely on utilities outside of Java. In particular, you could use the ImageMagick command-line utilities to stitch together your mosaic, and then load the completed mosaic from disk. If you want to get fancy, there is also a C++ API for ImageMagick, which is very efficient with memory.