Best resolution resizing image using ImgScalr - java

I am very new to the ImgScalr API. I need to resize my images for different views, one of them being a mobile view and the second a thumbnail view.
I have made use of the resize method, but have a doubt: out of the multiple options available, which resize method is best for resizing the image while keeping the proper aspect ratio (so that the image doesn't become blurred)?
One thing I noticed was that every resize method takes a targetSize argument. How does specifying this field make sure that the aspect ratio of the image does not get affected?
What should the ideal arguments to the resize method be, given that I need to generate a roughly 2 KB thumbnail view of an input image that may be around 2 MB in size?
I am a bit confused because of the lack of enough documentation and examples.

imgscalr author here - definitely understand the confusion. The code base itself (if you happen to glance at GitHub) is almost 50% comments if you are curious how the library works, but from a usage perspective you are right - I didn't put a lot of time into examples.
Hopefully I can hit some highlights quickly for you...
Aspect Ratios
A core design tenet of imgscalr is to always honor the aspect ratio - so if you pass in 200x1 (some ridiculous dimension as an example) it will attempt to calculate the minimum dimensions that will meet those 'target' dimensions.
This is handy if you always want your thumbnails in a certain box, like 200x200 -- just pass that in and imgscalr will determine a final width/height that won't be bigger than that (possibly something like 200x127 or 78x200)
Quality
By default the library takes what is called a 'balanced' approach to quality: it considers the delta in dimension change as well as whether you are scaling up or down, and chooses the most appropriate approach (speed vs. quality).
You can force it to always scale as quickly as possible (a good idea for scaling-up operations) or force it to always use high or ultra quality (a good idea if you want really crisp thumbnails, or for other operations that drastically reduce the image resolution where you want the result to still look decent).
On top of that you can also ask the library to apply some additional filtering to the image (called Image Ops) -- I ship some handy defaults out of the box, like an anti-aliasing op for when you are getting jagged edges on a lot of the source material you are scaling (common when scaling screenshots of desktops and other things with diagonal straight lines).
Overall
The library is meant to be as simple as possible to use, something no harder than:
BufferedImage thumbnail = Scalr.resize(src, 128);
will get you started... all the other operations around quality, fitting, modes, ops, etc. are just additional things you can choose to do if you decide the result isn't quite what you wanted.
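For instance, a sketch of a higher-quality thumbnail call (Scalr.Method, Scalr.Mode and Scalr.OP_ANTIALIAS all ship with the library; the 200x200 box is just an illustrative size):

import java.awt.image.BufferedImage;
import org.imgscalr.Scalr;

// fit within a 200x200 box, favoring quality over speed,
// with the stock anti-aliasing Image Op applied afterwards
BufferedImage thumbnail = Scalr.resize(src,
        Scalr.Method.ULTRA_QUALITY,  // force the incremental high-quality scale
        Scalr.Mode.AUTOMATIC,        // honor the aspect ratio inside the target box
        200, 200,
        Scalr.OP_ANTIALIAS);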
Hope that helps!

Related

How do you utilize SWT's Hi-DPI support for widget sizing?

Does SWT (or JFace) have a public convenience method for converting conventional units to their scaled counterparts? I've found mention of a DPIUtil class, but that's part of an internal namespace.
If there's no convenience method available, then is there a reliable way to access the zoom level? I see there's Device#getDeviceZoom(), but that is protected. There is Device#getDPI(), which is public, so it might be useful. Does that take scaling into consideration, or is it naïve and just declares that the DPI is 96 for everything?
I'm applying default sizing hints to some panels and I'd like them to take the monitor scaling setting into consideration. E.g., say on a regular display I want the default to be 300px, but at 150% scaling I want to calculate it as 450px. The calculation is obviously simple, but I need the multiplier.
NOTE: This is related but different from my previous question How do you utilize SWT's Hi-DPI support for icons? because SWT provides classes to specifically handle this with images.
I haven't found anything other than DPIUtil for determining the scale (zoom) factor.
But you don't normally need this information. Specifying a size of 300px will be automatically scaled to 450px by SWT on a 150% scaled device (and any 150% scaled image you provide will be used). I have an iMac with two screens - a 5k screen scaled at 200% and a 2.5k screen not scaled - and SWT apps appear the same size on both.
The scaling is actually done in the OS rather than SWT (at least that is how it works on macOS). The OS scales up the sizes, renders fonts at the higher resolution and uses the high resolution images if available. So programs don't need to do anything other than provide hi-res images.
This way even old programs that don't know about zoomed displays still appear at a sensible size.
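If you do still need an explicit multiplier, here is a minimal sketch using the public Device#getDPI() - assuming a 96 DPI unscaled baseline and that getDPI() reflects the scaled value, both assumptions worth verifying per platform:

// hypothetical sketch; 96 is assumed as the unscaled baseline DPI
org.eclipse.swt.graphics.Point dpi = display.getDPI();
float multiplier = dpi.x / 96f;               // e.g. 1.5f at 150% scaling
int widthHint = Math.round(300 * multiplier); // 300px -> 450px at 150%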

Face Features Detection Using OpenCV Haar-cascades

I am using Java with the OpenCV library to detect the face, eyes and mouth using a laptop camera.
What I have done so far:
Capture Video Frames using VideoCapture object.
Detect Face using Haar-Cascades.
Divide the Face region into Top Region and Bottom Region.
Search for Eyes inside Top region.
Search for Mouth inside Bottom region.
Problem I am facing:
At first the video runs normally, but then it suddenly becomes slower.
Main Questions:
Do higher camera resolutions work better for Haar-Cascades?
Do I have to capture video frames at a certain scale? For example, 100px x 100px?
Do Haar-Cascades work better in Gray-scale Images?
Do different lighting conditions make a difference?
What exactly does the method detectMultiScale(params) do?
If I want to go for further analysis of Eye Blinking, Eye Closure Duration, Mouth Yawning, Head Nodding and Head Orientation to detect fatigue (drowsiness) using a Support Vector Machine, any advice?
Your help is appreciated!
The following article would give you an overview of the things going on under the hood; I would highly recommend reading it.
Do higher camera resolutions work better for Haar-Cascades?
Not necessarily. cascade.detectMultiScale has params to adjust for various input width/height scenarios, like minSize and maxSize. These are optional params, but you can tweak them to get robust predictions if you have control over the input image size. If you set minSize to a smaller value and ignore maxSize, then it will work for smaller and high-res images alike, but the performance will suffer. Also, if you are wondering how there can be no difference between high-res and low-res images, consider that cascade.detectMultiScale internally scales the image down to lower resolutions for a performance boost; that is why defining maxSize and minSize is important, to avoid unnecessary iterations.
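To make those params concrete, here is a sketch using the OpenCV Java bindings (the cascade file name and the size bounds are illustrative values, not recommendations):

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

CascadeClassifier faceCascade = new CascadeClassifier("haarcascade_frontalface_default.xml");
Mat gray = new Mat();
Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY); // cascades operate on gray-scale
Imgproc.equalizeHist(gray, gray);                      // mitigates some lighting variation
MatOfRect faces = new MatOfRect();
faceCascade.detectMultiScale(
        gray, faces,
        1.1,                 // scaleFactor: step between pyramid scales
        3,                   // minNeighbors: higher = fewer false positives
        0,                   // flags (ignored by newer OpenCV versions)
        new Size(30, 30),    // minSize: ignore detections smaller than this
        new Size(400, 400)); // maxSize: ignore detections larger than this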
Do I have to capture video frames at a certain scale? For example, 100px x 100px?
This mainly depends upon the params you pass to cascade.detectMultiScale. Personally, I'd guess that 100 x 100 would be too small for detecting smaller faces in the frame, as some features would be completely lost while resizing the frame to smaller dimensions, and cascade.detectMultiScale is highly dependent upon the gradients (features) in the input image.
But if a face is the major part of the input frame, and there are no other smaller faces lurking in the background, then you may use 100 x 100. I have tested some sample faces of size 100 x 100 and it worked pretty well. If this is not the case, then a 300-400 px width should work well. However, you will need to tune the params in order to achieve accuracy.
Do Haar-Cascades work better in Gray-scale Images?
They work only in gray-scale images.
If you read the first part of the article, you will come to know that face detection consists of detecting many binary patterns in the image. This basically comes from the Viola-Jones paper, which is the basis of this algorithm.
Do different lighting conditions make a difference?
Maybe in some cases, but Haar features are largely lighting-invariant.
If by different lighting conditions you mean taking images under green or red light, then it should not affect the detection: the Haar features (being computed on gray-scale) are independent of the RGB color of the input image. The detection mainly depends upon the gradients/features in the input image. So as long as there are enough gradient differences in the input image (e.g. the eyebrow has lower intensity than the forehead), it will work fine.
But consider a case where the input image is back-lit or has very low ambient light. In that case it is possible that some prominent features are not found, which may result in the face not being detected.
What exactly does the method detectMultiScale(params) do?
If you have read the article by this point, you should know this well already.
If I want to go for further analysis of Eye Blinking, Eye Closure Duration, Mouth Yawning, Head Nodding and Head Orientation to detect fatigue (drowsiness) using a Support Vector Machine, any advice?
No, I won't suggest you perform this type of gesture detection with an SVM, as it would be extremely slow to run 10 different cascades to conclude the current facial state. Instead, I would recommend you use a facial landmark detection framework, such as Dlib. You may search for some other frameworks as well, because Dlib's model is nearly 100MB in size and may not suit your needs if you want to port it to a mobile device. So the key is Facial Landmark Detection: once you get the full face labelled, you can draw conclusions like whether the mouth is open or the eyes are blinking, and it works in real time, so your video processing won't suffer much.

How to resize a BufferedImage in Java

I am looking for the simplest (and still non-problematic) way to resize a BufferedImage in Java.
In an answer to another question, the user coobird suggested the following solution, in his words (very slightly changed by me):
The Graphics object has a method that draws an Image while also performing a resize operation: Graphics.drawImage(Image, int, int, int, int, ImageObserver). It can be used to specify the location along with the size of the image when drawing.
So, we could use a piece of code like this:
BufferedImage originalImage = // .. created somehow
BufferedImage newImage = new BufferedImage(SMALL_SIZE, SMALL_SIZE, BufferedImage.TYPE_INT_RGB);
Graphics g = newImage.createGraphics();
g.drawImage(originalImage, 0, 0, SMALL_SIZE, SMALL_SIZE, null);
g.dispose();
This will take originalImage and draw it on the newImage with the width and height of SMALL_SIZE.
This solution seems rather simple. I have two questions about it:
Will it also work (using the exact same code) if I want to resize an image to a larger size, not only a smaller one?
Are there any problems with this solution?
If there is a better way to do this, please suggest it.
Thanks
The major problem with single-step scaling is that it doesn't generally produce quality output, as it focuses on taking the original and squeezing it into a smaller space, usually by dropping out a lot of pixel information (different algorithms do different things, so I'm generalizing).
Will drawImage scale up and down? Yes. Will it do it efficiently or produce quality output? That comes down to the implementation; generally speaking, most of the scaling algorithms used by default are focused on speed. You can affect these in a small way, but generally, unless you're scaling over a small range, the quality suffers (in my experience).
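That "small way" is the rendering hints; here is a sketch reusing the newImage/originalImage names from the question's snippet:

Graphics2D g2 = newImage.createGraphics();
// nudge the default scaling algorithm toward quality before drawing
g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
        RenderingHints.VALUE_INTERPOLATION_BICUBIC);
g2.setRenderingHint(RenderingHints.KEY_RENDERING,
        RenderingHints.VALUE_RENDER_QUALITY);
g2.drawImage(originalImage, 0, 0, SMALL_SIZE, SMALL_SIZE, null);
g2.dispose();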
You can take a look at The Perils of Image.getScaledInstance() for more details and discussions on the topic.
What is generally recommended is to either use a dedicated library, like imgscalr (which, from the ten minutes I've played with it, does a pretty good job), or perform a stepped scale.
A stepped scale basically steps the image up or down by powers of 2 until it reaches its desired size (there's a sketch after the example links below). Remember, scaling up is nothing more than taking a pixel and enlarging it a little, so quality will always be an issue if you scale up to a very large size.
For example...
Quality of Image after resize very low -- Java
Scale the ImageIcon automatically to label size
Java: JPanel background not scaling
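Here is a minimal sketch of the down-scaling half of a stepped scale, using only java.awt (halving per step and the bilinear hint are one common choice, not the only one):

static BufferedImage stepDown(BufferedImage src, int targetWidth, int targetHeight) {
    BufferedImage img = src;
    int w = img.getWidth();
    int h = img.getHeight();
    // halve until one more halving would undershoot the target
    while (w / 2 >= targetWidth && h / 2 >= targetHeight) {
        w /= 2;
        h /= 2;
        BufferedImage step = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = step.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(img, 0, 0, w, h, null);
        g.dispose();
        img = step;
    }
    // one final draw to the exact target size
    BufferedImage out = new BufferedImage(targetWidth, targetHeight, BufferedImage.TYPE_INT_RGB);
    Graphics2D g = out.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
    g.drawImage(img, 0, 0, targetWidth, targetHeight, null);
    g.dispose();
    return out;
}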
Remember, any scaling is generally an expensive operation (based on the original and target size of the image), so it is generally best to do those operations outside of the paint process and in the background where possible.
There is also the question of whether you want to maintain the aspect ratio of the image. Based on your example, the image would be scaled in a square manner (stretched to meet the requirements of the target size), which is generally not desired. You can pass -1 to either the width or height parameter (this is how Image.getScaledInstance maintains the aspect ratio of the original image), or you could simply take control and make your own determination over whether you want to fill or fit the image to a target area, for example...
Java: maintaining aspect ratio of JPanel background image
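Taking control of fit vs. fill is mostly a few lines of arithmetic; here is a sketch of the "fit" case (swap Math.min for Math.max to get a "fill"):

// scale factor that fits srcW x srcH inside targetW x targetH, preserving aspect ratio
double scale = Math.min((double) targetW / srcW, (double) targetH / srcH);
int scaledW = (int) Math.round(srcW * scale);
int scaledH = (int) Math.round(srcH * scale);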
In general, I avoid using drawImage or getScaledInstance most of the time (if you're only scaling over a small range, or want a fast, low-quality scale, they can work) and rely more on things like fitting/filling a target area and stepped scaling. The reason for using my own methods simply comes down to not always being allowed to use outside libraries. It's nice not to have to re-invent the wheel where you can.
It will enlarge the original if you set the parameters accordingly. But you should use some smart algorithm that preserves edges, because simply enlarging an image will make it blurry and will result in worse perceived quality.
No problems. Theoretically this can even be hardware-accelerated on certain platforms.

Image caching and performance

I'm currently trying to improve the performance of a map rendering library. In the case of point symbols, the library very often just draws the same image again and again at each location. The drawing process may be really complex, though, because the parametrization of the symbol is very rich. For each point, I have a tree structure that computes the image about to be drawn. When the parameters are not dependent on the data I'm processing, as I said earlier, I just draw a complex symbol several times.
I've tried to implement a caching mechanism. I store the images that have already been drawn, and if I encounter a configuration that has already been met, I fetch the image and draw it again. The first test I've made is for a very simple symbol: a circle with both its outline and interior filled.
As I know the symbol will be constant at all locations, I cache it and then draw it from the cached image. That works... but I face two important problems:
The quality of the drawn symbols is badly damaged.
More problematic: the time needed to render the map is actually higher with caching than without. That's pretty disappointing for a cache ^_^
The core code when the caching mechanism is on is the following :
if (pc.isCached(map)) {
    // reuse the previously rendered symbol
    BufferedImage bi = pc.getCachedValue(map);
    drawCachedImageOnGeometry(g2, sds, fid, selected, mt, the_geom, bi);
} else {
    // render the symbol once into an off-screen image, draw it, then cache it
    BufferedImage bi = g2.getDeviceConfiguration().createCompatibleImage(200, 200);
    Graphics2D tg2 = bi.createGraphics();
    graphic.draw(tg2, map, selected, mt, AffineTransform.getTranslateInstance(100, 100));
    drawCachedImageOnGeometry(g2, sds, fid, selected, mt, the_geom, bi);
    pc.cacheSymbol(map, bi);
}
The only interesting call made in drawCachedImageOnGeometry is
g2.drawRenderedImage(bi, AffineTransform.getTranslateInstance(x-100,y-100));
I've made some attempts to use VolatileImage instances rather than BufferedImage... but that caused deeper problems (I've not been able to ensure that the image is correctly rendered each time it is needed).
I've done some profiling too, and it appears that when using my cache, the operations that take the longest are the rendering operations made in AWT.
That said, I guess my strategy is wrong... Consequently, my questions are:
Is there an efficient way to achieve the goal I've explained?
More precisely, would it be faster to store the AWT instructions used to draw my symbols and replay them, translated as needed? I make the assumption that it may be possible to retrieve the "commands" used to build the symbol... I didn't find much information about that on the web, though... If it is possible, that would save me both the computation time of the symbol (which can be really complex, as said earlier) and the quality of my symbols.
Thanks in advance for all the information and resources you'll give me :-)
Agemen.
EDIT: Here are some details about the graphics that can be rendered. According to the symbology model I'm implementing, graphics can be really simple (i.e. a square with both fill and outline) as well as really complex (a label whose outline and fill are both drawn with hatches, for instance, and even a halo around it if I want). I want to use a cache because I'm sure that in most configurations I'll be able to:
differentiate the parameters that have been used to draw two different symbols of the same source that are styled with the same style.
be sure that two sources with the same parameters (location excepted) will produce the same symbol for the same style, but at two different locations (only a translation will be needed).
Because of these two points, caching seems to be a good strategy. Moreover, there may be thousands of duplicated symbols to be drawn in the same image.
You are awfully vague about what kind of operations your drawing really entails, so all I can give you are some very general pointers.
1.) Drawing a pre-rendered Image is not necessarily faster than drawing the same Image using Graphics2D operations. It depends a lot on the complexity required to draw the image. As an extreme case, consider fillRect() vs. a drawImage() of an Image containing the pre-rendered rectangle (fillRect just writes the destination pixels, whereas drawImage also needs to copy from a source).
2.) In most cases you never want to mess with VolatileImage directly. BufferedImage takes advantage of VolatileImage automatically unless you mess with the Image DataBuffer. If you have many pre-rendered images you may also run out of accelerated video memory and that degrades image drawing performance.
3.) On-the-fly scaling/rotating etc. of a pre-rendered image can be pretty costly (depending on the platform and current graphics transformations).
4.) The 'compatible image' you create may not really be compatible with the drawing target. You obtain an image compatible with the default screen device, which may not be compatible with the actual target in a multi-monitor setup. You may get better results using the actual target component's createImage().
EDIT:
5.) Translating the coordinates of a rendering operation may alter the destination pixels produced. An obvious case is when the coordinates are non-integers (either in the coordinates themselves or indirectly through the AffineTransform set on the graphics). Also, antialiasing of text and possibly other primitives may be influenced slightly by coordinates (subpixel rendering comes to mind).
You could attempt an approach that differentiates between symbols that are presumably fast or slow to render. The fast ones get rendered directly, while the slow ones are cached. The main problem here is deciding which ones are fast or slow; I expect this to be non-trivial.
Also, when you say there are thousands of symbols to be rendered, I imagine most of them should be clipped away, since only a small portion of the graph fits into a Window/Frame? If that's the case, don't bother much with caching. Drawing operations that are completely outside the current clip bounds are relatively cheap - all the graphics target really does for them is detect that they are completely invisible and then do nothing. If the goal is to produce an image to be saved to disk or printed, I wouldn't bother much with speeding up the rendering, since that is a relatively rare operation and the actual printing may far exceed the time needed for rendering the graph anyway.
If none of the above applies to your case, be somewhat careful that your cache does not use more time/memory to decide whether a cached version exists than it really saves in rendering time. You also need to take into account that building a cached image instead of rendering to the target directly costs you some time if that image is never reused. Caching can only gain you some speed if the image is reused at least once, preferably many more times.
If you build your symbols by combining primitive rendering operation objects (say there are Rectangle, Halo and Text rendering subclasses), you may want to assign each of them a cost indicator and only cache those symbols that exceed some (to-be-determined) cost threshold. Also, it may be a good idea to implement hashCode() and equals() for each primitive operation and for the symbol itself, for fast(er) equality detection.
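Here is a hypothetical sketch of such a cache key (SymbolKey and its fields are illustrative names, not from any particular library):

import java.awt.Color;
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

final class SymbolKey {
    final Color fill;
    final Color stroke;
    final double size;

    SymbolKey(Color fill, Color stroke, double size) {
        this.fill = fill;
        this.stroke = stroke;
        this.size = size;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof SymbolKey)) return false;
        SymbolKey k = (SymbolKey) o;
        return size == k.size && fill.equals(k.fill) && stroke.equals(k.stroke);
    }

    @Override
    public int hashCode() {
        // cheap hash probe rejects most non-matches before equals() runs
        return Objects.hash(fill, stroke, size);
    }
}

// cache lookups are then a single hash probe:
Map<SymbolKey, BufferedImage> cache = new HashMap<>();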

Appending to an Image File

I have written a program that takes a 'photo' and, for every pixel, chooses to insert an image from a range of other photos. The image chosen is the photo whose average colour is closest to the original pixel from the photograph.
I have done this by first averaging the RGB values of every pixel in the 'stock' image and then converting to CIE LAB, so I could calculate how 'close' it is to the pixel in question in terms of human perception of the colour.
I have then compiled an image where each pixel in the original 'photo' image has been replaced with the 'closest' stock image.
It works nicely and the effect is good; however, the stock image size is 300 by 300 pixels and, even with the virtual machine flags "-Xms2048m -Xmx2048m" (which, yes, I know is ridiculous), on a 555px by 540px image I can only replace the stock images scaled down to 50px before I get an out-of-memory error.
So basically I am trying to think of solutions. Firstly, I think the image effect itself may be improved by averaging every 4 pixels (a 2x2 square) of the original image into a single pixel and then replacing this pixel with the image, as this way the small photos will be more visible in the individual print. This should also allow me to draw the stock images at a greater size. Does anyone have any experience with this sort of image manipulation? If so, what tricks have you discovered to produce a nice image?
Ultimately, I think the way to avoid the memory errors would be to repeatedly save the image to disk, appending the next line of images to the file while continually removing the old set of rendered images from memory. How can this be done? Is it similar to appending to a normal file?
Any help in this last matter would be greatly appreciated.
Thanks,
Alex
I suggest looking into the Java Advanced Imaging (JAI) API. You're probably using BufferedImage right now, which does keep everything in memory: source images as well as output images. This is known as "immediate mode" processing. When you call a method to resize the image, it happens immediately. As a result, you're still keeping the stock images in memory.
With JAI, there are two benefits you can take advantage of.
Deferred mode processing.
Tile computation.
Deferred mode means that the output images are not computed right when you call methods on the images. Instead, a call to resize an image creates a small "operator" object that can do the resizing later. This lets you construct chains, trees, or pipelines of operations. So, your work would build a tree of operations like "crop, resize, composite" for each stock image. The nice part is that the operations are just command objects so you aren't consuming all the memory while you build up your commands.
This API is pull-based. It defers computation until some output action pulls pixels from the operators. This helps save time and memory by avoiding needless pixel operations.
For example, suppose you need an output image that is 2048x2048 pixels, scaled up from a 512x512 crop out of a source image that's 1600x512 pixels. Obviously, it doesn't make sense to scale up the entire 1600x512 source image just to throw away 2/3 of the pixels. Instead, the scaling operator will have a "region of interest" (ROI) based on its output dimensions. The scaling operator projects the ROI onto the source image and only computes those pixels.
The commands must eventually get evaluated. This happens in a few situations, mostly relating to output of the final image. So, asking for a BufferedImage to display the output on the screen will force all the commands to evaluate. Similarly, writing the output image to disk will force evaluation.
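Here is a sketch of this deferred style, assuming the javax.media.jai API (the file name and scale factors are illustrative):

import java.awt.image.BufferedImage;
import java.awt.image.renderable.ParameterBlock;
import javax.media.jai.Interpolation;
import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;

// build the operator chain; nothing is computed yet
RenderedOp source = JAI.create("fileload", "stock.jpg");
ParameterBlock pb = new ParameterBlock();
pb.addSource(source);
pb.add(0.5f);   // x scale
pb.add(0.5f);   // y scale
pb.add(0.0f);   // x translation
pb.add(0.0f);   // y translation
pb.add(Interpolation.getInstance(Interpolation.INTERP_BILINEAR));
RenderedOp scaled = JAI.create("scale", pb);

// evaluation is forced only when pixels are pulled:
BufferedImage out = scaled.getAsBufferedImage();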
In some cases, you can keep the second benefit of JAI, which is tile based rendering. Whereas BufferedImage does all its work right away, across all pixels, tile rendering just operates on rectangular sections of the image at a time.
Using the example from before, the 2048x2048 output image will get broken into tiles. Suppose these are 256x256; then the entire image gets broken into 64 tiles. The JAI operator objects know how to work a tile at a time. So, scaling the 512x512 section of the source image really happens 64 times, on 64x64 source pixels at a time.
Computing a tile at a time means looping across the tiles, which would seem to take more time. However, two things work in your favor when doing tile computation. First, tiles can be evaluated on multiple threads concurrently. Second, the transient memory usage is much, much lower than immediate mode computation.
All of which is a long-winded explanation for why you want to use JAI for this type of image processing.
A couple of notes and caveats:
You can defeat tile based rendering without realizing it. Anywhere you've got a BufferedImage in the workstream, it cannot act as a tile source or sink.
If you render to disk using the JAI or JAI Image I/O operators for JPEG, then you're in good shape. If you try to use the JDK's built-in image classes, you'll need all the memory. (Basically, avoid mixing the two types of image manipulation. Immediate mode and deferred mode don't mix well.)
All the fancy stuff with ROIs, tiles, and deferred mode is transparent to the program. You just make API calls on the JAI class. You only deal with the machinery if you need more control over things like tile sizes, caching, and concurrency.
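For example, here is a sketch of requesting a specific tile size via an ImageLayout rendering hint (reusing the ParameterBlock pb from the sketch above; the 256x256 tile size is illustrative):

import java.awt.RenderingHints;
import javax.media.jai.ImageLayout;

ImageLayout layout = new ImageLayout();
layout.setTileWidth(256);
layout.setTileHeight(256);
RenderingHints hints = new RenderingHints(JAI.KEY_IMAGE_LAYOUT, layout);
RenderedOp tiledScale = JAI.create("scale", pb, hints); // operators now compute 256x256 tiles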
Here's a suggestion that might be useful:
Try segregating the two main tasks into individual programs. Your first task is to decide which images go where, and that can be a simple mapping from coordinates to filenames, which can be represented as lines of text:
0,0,image123.jpg
0,1,image542.jpg
.....
After that task is done (and it sounds like you have it well handled), then you can have a separate program handle the compilation.
This compilation could be done by appending to an image, but you probably don't want to mess around with file formats yourself. It's better to let your programming environment do it by using a Java Image object of some sort. The biggest one you can fit in memory, pixel-wise, will be about 2G pixels, giving a maximum height and width of sqrt(2x10^9), roughly 44,700. From this number, dividing by the number of images you have for height and width, you will get the overall pixels allowed per sub-image, and you can paint them into the appropriate places.
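Here is a hypothetical sketch of that second program (mapping.txt, the grid dimensions, and the tile size are all illustrative; exception handling omitted):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.imageio.ImageIO;

int rows = 540, cols = 555, tile = 50; // illustrative grid and cell sizes
BufferedImage out = new BufferedImage(cols * tile, rows * tile, BufferedImage.TYPE_INT_RGB);
Graphics2D g = out.createGraphics();
for (String line : Files.readAllLines(Paths.get("mapping.txt"))) {
    String[] parts = line.split(",");  // row,col,filename
    int row = Integer.parseInt(parts[0]);
    int col = Integer.parseInt(parts[1]);
    BufferedImage stock = ImageIO.read(new File(parts[2]));
    g.drawImage(stock, col * tile, row * tile, tile, tile, null); // scale into its cell
}
g.dispose();
ImageIO.write(out, "jpg", new File("mosaic.jpg"));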
Every time you 'append', are you perhaps implicitly creating a new object with one more pixel, to replace the old one (i.e., a parallel to the classic problem of repeatedly appending to a String instead of using a StringBuilder)?
If you post the portion of your code that does the storing and appending, someone will probably help you find an efficient way of recoding it.
