I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused as none of the sites that I found online directly and clearly answered the questions I have. So here are my questions about hardware acceleration in Java:
1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10 I think), is hardware acceleration enabled by default? I read somewhere that
someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping()
is supposed to give an indication of whether or not hardware acceleration is enabled, and my program reports back true when that is run on the main Canvas instance I draw to. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it?
2) I have seen a couple of articles here and there about the difference between a BufferedImage and a VolatileImage, mainly saying that VolatileImage is the hardware accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated in my environment too? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of a VolatileImage, in the case where both are accelerated, is that VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also supports acceleration now, would it not have the same kind of detection built in, just hidden from the user, in case the memory is dumped?
3) Is there any advantage to using
someGraphicsConfiguration.createCompatibleImage()/createCompatibleVolatileImage()
as opposed to
ImageIO.read()
In a tutorial I have been reading about setting up the rendering window properly (tutorial), the createCompatibleImage method, which I believe returns a BufferedImage, is used to get "hardware accelerated" images for fast drawing, which ties back into question 2 about whether BufferedImage is hardware accelerated.
4) This is less about hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once, to reduce the number of times the current texture needs to be switched. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever said this clearly.
5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other way I should be implementing this?
Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.
1)
So far, hardware acceleration is not enabled by default, and to my knowledge that has not changed yet. To activate OpenGL-based rendering acceleration, pass the argument -Dsun.java2d.opengl=true to the Java launcher at program start-up, or set the property before using any rendering libraries: System.setProperty("sun.java2d.opengl", "true"); It is an optional parameter.
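For illustration, a minimal launcher sketch: the property has to be set before any Java2D/AWT classes are initialized, so it goes first thing in main(). The createAndShowGui() call is a hypothetical placeholder for your own start-up code.

public class Launcher {
    public static void main(String[] args) {
        // Enable the OpenGL pipeline; equivalent to -Dsun.java2d.opengl=true.
        // Using the value "True" (capital T) additionally prints a confirmation
        // line to stdout on Sun/Oracle JVMs.
        System.setProperty("sun.java2d.opengl", "true");

        createAndShowGui(); // hypothetical: build the JFrame/Canvas here
    }
}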
2)
Yes, BufferedImage encapsulates some of the details of managing the volatile memory, because when the BufferedImage is accelerated a copy of it is stored in VRAM as a VolatileImage.
The upside of a BufferedImage is that, as long as you are not manipulating the pixels it contains and are just copying them (for example with a call to graphics.drawImage()), the BufferedImage will be accelerated after a certain unspecified number of copies, and it will manage the VolatileImage for you.
The downside of a BufferedImage is that if you are doing image editing and changing the pixels in the BufferedImage, in some cases it will give up trying to accelerate it. At that point, if you are looking for performant rendering for your editing, you need to consider managing your own VolatileImage. I do not know which operations make the BufferedImage give up on trying to accelerate rendering for you.
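For reference, managing a VolatileImage yourself usually follows the validate()/contentsLost() loop described in the VolatileImage javadoc. A rough sketch (canvas, width, height and screenGraphics are assumed to exist, and renderFrame() is a hypothetical method containing your actual drawing code):

GraphicsConfiguration gc = canvas.getGraphicsConfiguration();
VolatileImage vImg = gc.createCompatibleVolatileImage(width, height);

do {
    // Re-create the surface if the graphics configuration changed
    // (for example the window moved to another monitor).
    if (vImg.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
        vImg = gc.createCompatibleVolatileImage(width, height);
    }

    Graphics2D g = vImg.createGraphics();
    try {
        renderFrame(g); // hypothetical: draw the frame into the off-screen image
    } finally {
        g.dispose();
    }

    // Copy the off-screen image to the on-screen Graphics.
    screenGraphics.drawImage(vImg, 0, 0, null);

    // If the VRAM surface was lost while drawing, loop and redraw.
} while (vImg.contentsLost());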
3)
The advantage of using createCompatibleImage()/createCompatibleVolatileImage() is that ImageIO.read() does not do any conversion to an image data model supported by the default screen device.
So if you import a PNG, it will be represented in the format built by the PNG reader. This means that every time it is rendered by a GraphicsDevice it must first be converted to a compatible image data model.
// Read the image in whatever data model the PNG/JPEG reader produces.
BufferedImage image = ImageIO.read(url);

// Create an image whose data model matches the default screen device.
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice gd = ge.getDefaultScreenDevice();
GraphicsConfiguration gc = gd.getDefaultConfiguration();
BufferedImage convertedImage = gc.createCompatibleImage(image.getWidth(),
        image.getHeight(),
        image.getTransparency());

// Copy the decoded image into the compatible image once, up front.
Graphics2D g2d = convertedImage.createGraphics();
g2d.drawImage(image, 0, 0, image.getWidth(), image.getHeight(), null);
g2d.dispose();
The above process converts an image read with the ImageIO API into a BufferedImage whose image data model is compatible with the default screen device, so that the conversion does not need to take place every time it is rendered. This is most advantageous when you will be rendering the image very frequently.
4)
You do not need to make an effort to batch your image rendering, because for the most part Java will attempt to do this for you. There is no reason why you can't attempt it, but in general it is better to profile your application and confirm that there is a bottleneck in the image rendering code before you carry out a performance optimization such as this. The main disadvantage is that it may be implemented slightly differently in each JVM, and then the enhancements might be worthless.
5)
To the best of my knowledge the design you have outlined is one of the better strategies out there when doing Double Buffering manually and actively rendering an application.
http://docs.oracle.com/javase/7/docs/api/java/awt/image/BufferStrategy.html
At this link you will find a description of the BufferStrategy. In the description it shows a code snippet that is the recommended way to do active rendering with a BufferStrategy object. I use this particular technique for my active rendering code. The only major difference is that in my code, like you, I have created the BufferStrategy on an instance of a Canvas which I put on a JFrame.
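For illustration, a stripped-down sketch of that pattern, with a Canvas on a JFrame (frame and running are assumed to exist, and render(Graphics2D) is a hypothetical method containing the drawing code):

Canvas canvas = new Canvas();
frame.add(canvas);
frame.pack();
frame.setVisible(true);

canvas.createBufferStrategy(2);                 // double buffering
BufferStrategy strategy = canvas.getBufferStrategy();

while (running) {
    do {
        do {
            Graphics2D g = (Graphics2D) strategy.getDrawGraphics();
            try {
                render(g);                      // hypothetical: your drawing code
            } finally {
                g.dispose();
            }
            // Redraw if the drawing buffer contents were restored (cleared).
        } while (strategy.contentsRestored());

        strategy.show();                        // flip/blit the back buffer to the screen
        // Redraw if the buffer was lost while showing.
    } while (strategy.contentsLost());
}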
Judging from some older documentation, you can tell on Sun JVMs whether hardware acceleration is on or not by checking the sun.java2d.opengl property.
Unfortunately, I do not know if this applies to the Apple JVM.
You can check if an individual image is hardware accelerated using Image's getCapabilities(GraphicsConfiguration).isAccelerated()
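For example, a quick check like this (a small sketch; canvas and image stand for your drawing Canvas and the Image you are curious about) reports whether an image currently lives in accelerated memory:

GraphicsConfiguration gc = canvas.getGraphicsConfiguration();
boolean accelerated = image.getCapabilities(gc).isAccelerated();
System.out.println("Image accelerated: " + accelerated);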
Having said all this, all the documentation I've seen (including this one) implies that BufferedImage is not hardware accelerated. Swing has also been changed to use VolatileImages for its double buffering for this very reason.
Related
I am looking for the simplest (and still non-problematic) way to resize a BufferedImage in Java.
In an answer to another question, the user coobird suggested the following solution, in his words (very slightly changed by me):
The Graphics object has a method to draw an Image while also performing a resize operation:
Graphics.drawImage(Image, int, int, int, int, ImageObserver)
This method can be used to specify the location along with the size of the image when drawing.
So, we could use a piece of code like this:
BufferedImage originalImage = // .. created somehow
BufferedImage newImage = new BufferedImage(SMALL_SIZE, SMALL_SIZE, BufferedImage.TYPE_INT_RGB);
Graphics g = newImage.createGraphics();
g.drawImage(originalImage, 0, 0, SMALL_SIZE, SMALL_SIZE, null);
g.dispose();
This will take originalImage and draw it on the newImage with the width and height of SMALL_SIZE.
This solution seems rather simple. I have two questions about it:
Will it also work (using the exact same code), if I want to resize an image to a larger size, not only a smaller one?
Are there any problems with this solution?
If there is a better way to do this, please suggest it.
Thanks
The major problem with single-step scaling is that it doesn't generally produce quality output, as it focuses on taking the original and squeezing it into a smaller space, usually by dropping a lot of pixel information (different algorithms do different things, so I'm generalizing).
Will drawImage scale up and down? Yes. Will it do so efficiently or produce quality output? That comes down to the implementation; generally speaking, most of the scaling algorithms used by default are focused on speed. You can affect them in a small way, but unless you're scaling over a small range, the quality generally suffers (in my experience).
You can take a look at The Perils of Image.getScaledInstance() for more details and discussions on the topic.
What is generally recommended is to either use a dedicated library like imgscalr, which, from the ten minutes I've played with it, does a pretty good job, or to perform a stepped scale.
A stepped scale basically steps the image up or down by powers of 2 until it reaches its desired size. Remember, scaling up is nothing more than taking a pixel and enlarging it a little, so quality will always be an issue if you scale up to a very large size.
For example...
Quality of Image after resize very low -- Java
Scale the ImageIcon automatically to label size
Java: JPanel background not scaling
Remember, any scaling is generally an expensive operation (based on the original and target size of the image), so it is generally best to try and do those operations outside of the paint process and in the background where possible.
There is also the question of whether you want to maintain the aspect ratio of the image. Based on your example, the image would be scaled in a square manner (stretched to meet the requirements of the target size), which is generally not desired. You can pass -1 to either the width or height parameter and the underlying algorithm will maintain the aspect ratio of the original image, or you can simply take control yourself and decide whether you want to fill or fit the image to a target area, for example...
Java: maintaining aspect ratio of JPanel background image
In general, I avoid using drawImage or getScaledInstance most of the time (if you're scaling only over a small range or want a low-quality, fast scale, they can work) and rely more on things like fitting/filling a target area and stepped scaling; a rough sketch of the stepped approach follows. The reason for using my own methods simply comes down to not always being allowed to use outside libraries. It's nice not to have to re-invent the wheel where you can.
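For reference, a stepped downscale along those lines might look roughly like this (a sketch, not the exact code used by imgscalr or the article above; it assumes the target is smaller than the source):

// Progressive (stepped) downscaling: halve the image repeatedly with a
// decent interpolation hint until the target size is reached.
public static BufferedImage scaleDown(BufferedImage src, int targetWidth, int targetHeight) {
    BufferedImage current = src;
    int w = src.getWidth();
    int h = src.getHeight();

    while (w > targetWidth || h > targetHeight) {
        // Halve each dimension, but never go below the target size.
        w = Math.max(w / 2, targetWidth);
        h = Math.max(h / 2, targetHeight);

        BufferedImage step = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = step.createGraphics();
        g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g2.drawImage(current, 0, 0, w, h, null);
        g2.dispose();
        current = step;
    }
    return current;
}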
It will enlarge the original if you set the parameters accordingly. But you should use a smarter algorithm that preserves edges, because simply enlarging an image will make it blurry and result in worse perceived quality.
No problems. Theoretically this can even be hardware-accelerated on certain platforms.
I have spent a bit of time researching whether it is possible to draw on top of a VLCJ movie within a Java application. I have found a few bits of conflicting advice, some saying it is not possible and some referencing articles which have since moved on oracle.com.
Can someone clarify whether it is possible to draw Java2D graphics like rectangles/lines with transparent backgrounds, so the video stream underneath can still be viewed while the shapes are present on screen?
If this is not possible with vlcj, what would be a good alternative for a Linux- and Windows-compatible media player allowing annotation over a playing video stream? Please note I do not have to be limited to Java, but something where I can get re-use out of the drawing routines I develop across multiple platforms would be ideal.
Yes, you can do it. For the normal hardware rendered video player, you need at least Java 6u10 (preferably 7) and achieve this by overlaying a transparent JWindow on top of the VLC canvas (it's not too hard to add events to the canvas to check for updates in position/size and then move the overlaid window correspondingly).
The other way that doesn't involve using overlaid windows is to use a DirectMediaPlayer, where you have access to the framebuffer directly (and can therefore do what you like with the pixels, including wrapping them as textures around 3D objects and so on). So with this approach, you could simply draw what you wanted onto the frame buffer before rendering it to screen in the way you chose. This is the most flexible approach, but comes with the downside that if you're not very careful about your implementation, you lose all the GPU acceleration and end up crippling the CPU, especially for HD video.
If a simple overlay would do the trick, I'd try that first, and just resort to a DirectMediaPlayer if you have to.
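As a rough sketch of the overlay approach (assuming videoCanvas is the Canvas the video is rendered into, and leaving out the ComponentListener that keeps the overlay in sync with the canvas), something along these lines works on Java 7; on 6u10 the equivalent com.sun.awt.AWTUtilities calls are used instead:

// A per-pixel translucent window positioned over the video canvas.
JWindow overlay = new JWindow();
overlay.setBackground(new Color(0, 0, 0, 0));   // fully transparent background

JPanel drawPanel = new JPanel() {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        g2.setColor(Color.RED);
        g2.drawRect(50, 50, 200, 100);          // annotation drawn over the video
    }
};
drawPanel.setOpaque(false);
overlay.add(drawPanel);

// Size and position the overlay over the video canvas (normally kept up to
// date from listeners on the canvas/frame).
Point p = videoCanvas.getLocationOnScreen();
overlay.setBounds(p.x, p.y, videoCanvas.getWidth(), videoCanvas.getHeight());
overlay.setVisible(true);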
I'm currently trying to improve the performance of a map rendering library. In the case of point symbols, the library very often just draws the same image again and again at each location. The drawing process may be really complex, though, because the parametrization of the symbol is very rich. For each point, I have a tree structure that computes the image about to be drawn. When the parameters do not depend on the data I'm processing, as I said earlier, I just draw a complex symbol several times.
I've tried to implement a caching mechanism. I store the images that have already been drawn, and if I encounter a configuration that has already been met, I fetch the image and draw it again. The first test I've made is for a very simple symbol: a circle whose outline and interior are both filled.
As I know the symbol will be constant in all locations, I cache it and draw it again from the cached image then. That works... but I face two important problems :
The quality of the drawn symbols is noticeably degraded.
More problematic: the time needed to render the map is actually higher with caching than without caching. That's pretty disappointing for a cache ^_^
The core code when the caching mechanism is on is the following :
if (pc.isCached(map)) {
    // The symbol has already been rendered once: reuse the cached image.
    BufferedImage bi = pc.getCachedValue(map);
    drawCachedImageOnGeometry(g2, sds, fid, selected, mt, the_geom, bi);
} else {
    // First time this configuration is met: render the symbol into an
    // off-screen compatible image, draw it, then cache it.
    BufferedImage bi = g2.getDeviceConfiguration().createCompatibleImage(200, 200);
    Graphics2D tg2 = bi.createGraphics();
    graphic.draw(tg2, map, selected, mt, AffineTransform.getTranslateInstance(100, 100));
    drawCachedImageOnGeometry(g2, sds, fid, selected, mt, the_geom, bi);
    pc.cacheSymbol(map, bi);
}
The only interesting call made in drawCachedImageOnGeometry is
g2.drawRenderedImage(bi, AffineTransform.getTranslateInstance(x-100,y-100));
I've made some attempts to use VolatileImage instances rather than BufferedImage... but that causes deeper problems (I've not been able to be sure that the image will be correctly rendered each time it is needed).
I've done some profiling too, and it appears that when using my cache, the operations that take the longest time are the rendering operations made in AWT.
That said, I guess my strategy is wrong... Consequently, my questions are :
Is there any efficient way to achieve the goal I've explained?
More accurately, would it be faster to store the AWT instructions used to draw my symbols and replay them, translated, as needed? I assume it may be possible to retrieve the "commands" used to build the symbol... I didn't find much information about that on the web, though... If it is possible, that would save me the computation time of the symbol (which can be really complex, as said earlier) while preserving the quality of my symbols.
Thanks in advance for all the information and resources you'll give me :-)
Agemen.
EDIT: Here are some details about the graphics that can be rendered. According to the symbology model I'm implementing, graphics can be really simple (i.e. a filled square with its outline) as well as really complex (a label whose outline and fill are both drawn with hatches, for instance, and even a halo around it if I want). I want to use a cache because I'm sure that in most configurations I'll be able to:
differentiate the parameters that have been used to draw two different symbols of the same source that are styled with the same style.
be sure that two sources with the same parameters (location excepted) will produce the same symbol for the same style, but at two different locations (only a translation will be needed).
Because of these two points, caching seems to be a good strategy. Moreover, there may be thousands of duplicated symbols to be drawn in the same image.
You are awfully vague about what kind of operations your drawing really entails, so all I can give you are some very general pointers.
1.) Drawing a pre-rendered Image is not necessarily faster than drawing the same image using Graphics2D operations. It depends a lot on the complexity required to draw the image. As an extreme case, consider fillRect() vs. a drawImage() of an Image containing the pre-rendered rectangle (fillRect just writes the destination pixels, whereas drawImage also needs to copy from a source).
2.) In most cases you never want to mess with VolatileImage directly. BufferedImage takes advantage of VolatileImage automatically unless you mess with the Image DataBuffer. If you have many pre-rendered images you may also run out of accelerated video memory and that degrades image drawing performance.
3.) On-the-fly scaling/rotating etc. of a pre-rendered image can be pretty costly (depending on the platform and current graphics transformations).
4.) The 'compatible image' you create may not really be compatible with the drawing target. You obtain an image compatible with the default screen device, which may not be compatible with the actual target in a multi-monitor setup. You may get better results using the actual target component's createImage().
EDIT:
5.) Translating the coordinates of a rendering operation may alter the destination pixels produced. An obvious case is when the coordinates are non-integers (either in the coordinates themselves or indirectly through the AffineTransform set on the graphics). Also, antialiasing of text and possibly other primitives may be influenced slightly by coordinates (subpixel rendering comes to mind).
You could attempt an approach that differentiates between symbols that are presumably fast or slow to render: the fast ones are rendered directly, while the slow ones are cached. The main problem here is deciding which ones are fast or slow; I expect this to be non-trivial.
Also, I wonder about your statement that there are thousands of symbols to be rendered, as I imagine most of them should be clipped away since only a small portion of the graph fits into a Window/Frame. If that's the case, don't bother much with caching. Drawing operations that are completely outside the current clip bounds will be relatively cheap - all the graphics target really does for them is detect that they are completely invisible and, when they are, do nothing. If the goal is to produce an image to be saved to disk or printed (whatever), I wouldn't bother much with speeding up the rendering, since this is a relatively rare operation and the actual printing may by far exceed the time needed for rendering the graph anyway.
If none of the above applies to your case, be somewhat careful that your cache does not use more time/memory to decide whether a cached version exists than it really saves in rendering time. You also need to take into account that building a cached image instead of rendering to the target directly does cost you some time if that image is never reused. Caching can only gain you some speed if the image is reused at least once, preferably many more times.
If you build your symbols from primitive operations by combining primitive rendering operation objects (say there is a Rectangle, Halo and Text rendering object subclass), you may want to assign each of them a cost indicator and only cache those symbols that exceed some (to be determined) cost threshold. Also, it may be a good idea to implement hashCode() for each primitive operation and for the symbol itself, for fast(er) equals detection.
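As a very rough sketch of that idea (the Symbol type and its estimatedCost()/width()/height()/draw() methods are made up for illustration, and the cost threshold would need tuning):

// Cache only symbols whose estimated rendering cost exceeds a threshold,
// keyed by the symbol's own equals()/hashCode().
class SymbolCache {
    private static final int COST_THRESHOLD = 10;        // to be determined empirically
    private final Map<Symbol, BufferedImage> cache = new HashMap<Symbol, BufferedImage>();

    BufferedImage getOrRender(Symbol symbol, GraphicsConfiguration gc) {
        if (symbol.estimatedCost() < COST_THRESHOLD) {
            return null;                                  // cheap symbol: caller renders it directly
        }
        BufferedImage img = cache.get(symbol);
        if (img == null) {
            img = gc.createCompatibleImage(symbol.width(), symbol.height(),
                    Transparency.TRANSLUCENT);
            Graphics2D g = img.createGraphics();
            symbol.draw(g);                               // hypothetical render method
            g.dispose();
            cache.put(symbol, img);
        }
        return img;
    }
}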
I have to scale an image with Java JAI. At the time now, I use the following code:
private static RenderedOp scale(RenderedOp image, float scale) {
ParameterBlock scaleParams = new ParameterBlock();
scaleParams.addSource(image);
scaleParams.add(scale).add(scale).add(0.0f).add(0.0f);
scaleParams.add(Interpolation.getInstance(Interpolation.INTERP_BICUBIC_2));
// Quality related hints when scaling the image
RenderingHints scalingHints = new RenderingHints(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
scalingHints.put(RenderingHints.KEY_ALPHA_INTERPOLATION, RenderingHints.VALUE_ALPHA_INTERPOLATION_QUALITY);
scalingHints.put(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
scalingHints.put(RenderingHints.KEY_COLOR_RENDERING, RenderingHints.VALUE_COLOR_RENDER_QUALITY);
scalingHints.put(RenderingHints.KEY_DITHERING, RenderingHints.VALUE_DITHER_ENABLE);
scalingHints.put(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BICUBIC);
scalingHints.put(JAI.KEY_BORDER_EXTENDER, BorderExtender.createInstance(BorderExtender.BORDER_COPY));
return JAI.create("scale", scaleParams, scalingHints);
}
Unfortunately, this leads to very bad results, especially because I often have to scale images with a scale factor less than 0.5...
Any advice?
I am guessing you are trying to scale a larger image down to thumbnail size, or something with an equally large difference between the original and the scaled image?
If so, this topic was actually addressed by Chris Campbell from the Java2D team back in 2007 (I am not expecting you knew this, just pointing out that it's a common question), because the new Java2D scaling approaches (RenderingHints.VALUE_INTERPOLATION_*) did not provide an equivalent to the then-deprecated Image.getScaledInstance(SCALE_AREA_AVERAGING or SCALE_SMOOTH) approach.
As it turns out, the SCALE_AREA_AVERAGING and SCALE_SMOOTH modes of the old Java 1.1 Image.getScaledInstance approach were fairly expensive, multi-step operations that did a lot of work in the background to generate that nice-looking image.
Chris pointed out that the new and "correct" way, using Java2D, to get this same result is an incremental process of scaling the image in half over and over until the desired image size is reached, preferably using a higher-quality interpolation hint like RenderingHints.VALUE_INTERPOLATION_BICUBIC or BILINEAR.
The result comes out almost identical to the original Image.getScaledInstance approach that folks want.
I actually went hunting for this answer a few months ago while writing an image-hosting service and was surprised at how complicated the simple question of "How do I make a nice looking thumbnail in Java?" became.
I eventually created a small Java library (Apache 2, open sourced) that implements 3 different approaches to image scaling in Java using "best practices", including the incremental approach that Chris suggested.
The library is called imgscalr. You can download and use it as simple as:
BufferedImage thumbnail = Scalr.resize(srcImage, 150);
There are more options to set and use (e.g. the quality or speed of the scaling), but out of the box there are intelligent defaults for everything to produce a nice-looking scaled image for you, so you don't have to worry about more if you don't want to. The library also makes a strong effort to avoid any object allocation that isn't absolutely necessary and to dispose of intermediate BufferedImage instances immediately if not needed -- it is intended to be used as part of a long-running server app, so this was critical.
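For example, if I remember the imgscalr API correctly (treat this as a sketch and check the library's javadoc for the exact signatures), asking for a higher-quality scale looks something like this:

// Hypothetical usage: prefer quality over speed when producing a 150px thumbnail.
BufferedImage thumbnail = Scalr.resize(srcImage, Scalr.Method.QUALITY, 150);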
I've made a few releases of the library already, and if you'd rather just rip out the "good stuff" and do something with it yourself, go for it. It is all on GitHub and none of it is top secret, just an attempt to make people's lives easier.
Hope that helps.
I've obtained good results with Thumbnailator. I don't know what JAI.KEY_BORDER_EXTENDER is supposed to do, but I guess the rest of the functionality (antialiasing, dithering, etc) is supported.
I used it to generate grayscale thumbnails of quite big black and white images.
If you are lucky and have only black & white images, then you can use a very fast, good-quality operation: SubsampleBinaryToGray.
I'm in the process of writing a custom heatmap generator. I'm wondering what the fastest way is to draw boxes (up to around 1 million) in Java. Most questions I've found have concentrated on dynamic images (like in games), and I'm wondering if there's a better way to go for static images. I've tried using Swing (via a GridLayout, adding a colored canvas to each box), drawing directly on the panel with Graphics2D, and also the Processing libraries. While Processing is pretty fast and generates a clean image, the window seems to have problems retaining it; it regenerates different parts of the image whenever you minimize, move the window, etc.
I've heard of OpenGL, but I've never touched it, and I wanted some feedback as to whether that (or something else) would be a better approach before investing time in it.
For static images, I paint them to a BufferedImage (BI) and then draw that via Graphics2D.
I keep a boolean that tells me whether the BI is up to date. That way I only incur the expensive painting cost once. If you want to get fancy, you can scale the BI to handle minor resizing. For a major resize you'll probably want to repaint the BI so as not to introduce artifacts. It's also useful for overlaying data (such as cross hairs, the value under the cursor, etc.), as you're then only painting the BI and the overlaid data.
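A minimal sketch of that approach inside a Swing component (drawHeatmap(Graphics2D) and drawOverlays(Graphics2D) are hypothetical methods standing in for the expensive box painting and the cheap overlays):

class HeatmapPanel extends JPanel {
    private BufferedImage cache;
    private boolean dirty = true;   // set to true whenever the underlying data changes

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (dirty || cache == null
                || cache.getWidth() != getWidth() || cache.getHeight() != getHeight()) {
            // Expensive path: repaint all the boxes into the cached image once.
            cache = getGraphicsConfiguration().createCompatibleImage(getWidth(), getHeight());
            Graphics2D g2 = cache.createGraphics();
            drawHeatmap(g2);
            g2.dispose();
            dirty = false;
        }
        // Cheap path on every repaint: blit the cached image, then the overlays.
        g.drawImage(cache, 0, 0, null);
        drawOverlays((Graphics2D) g);
    }
}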