How can I effectively find a Graphics2D rendering error?

Can you suggest a method of identifying the source of a rendering error from a debugger?
Misko Hevery classifies bugs into three categories:
Logical
Wiring
Rendering
It's clear to me that my problem is a rendering bug.
I have a Swing application with a Panel that contains multiple layers. Rendering all the layers can take a significant amount of time, so the application uses a thread pool to render layers, and tiles from layers, into BufferedImages. When it comes time for the Event Dispatch Thread to render the panel, the most recently rendered BufferedImages are drawn to the screen.
This setup has performed adequately.
A new feature requires that a certain layer type support transparency. Something, somewhere, isn't preserving the transparency. The error could be in a number of places: in the implementation of the objects being rendered, in the offline rendering thread implementation, or in the EDT rendering code that combines the many BufferedImages.
I'm not asking anyone to look at the code and tell me where the error is.
What I want to know is what techniques people have found particularly effective in troubleshooting a Graphics2D rendering issue.
I'm a big supporter of unit tests but I'd prefer to start with another technique.
Is there a method or trick to visually inspect a BufferedImage or Graphics2D object from the debugger?
In its Variables and Watches windows, NetBeans sometimes uses PropertyEditors to display variable values. In this example image, the values of foregroundColor and backgroundColor are shown as small swatches of each Color's value.
Is there an easy way to add/enable a NetBeans PropertyEditor that would display the contents of a BufferedImage?
I could temporarily sprinkle the code with method calls that write the various BufferedImages encountered to disk so they could be inspected offline. That might work, but it would be tedious to match the files on disk to the source code.
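For what it's worth, a throwaway helper along these lines keeps the disk dumps matched to their call sites by tagging each file with a caller-supplied label and a sequence number (the class and naming scheme are invented for the illustration, not from the question):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import javax.imageio.ImageIO;

// Temporary debugging aid: dump intermediate images to disk so they
// can be inspected offline and matched back to the call site.
final class ImageDump {
    private static final AtomicInteger SEQ = new AtomicInteger();

    static void dump(BufferedImage image, String label) {
        try {
            // PNG preserves the alpha channel, which is what is being debugged.
            File file = new File(String.format("debug-%03d-%s.png",
                    SEQ.incrementAndGet(), label));
            ImageIO.write(image, "png", file);
        } catch (IOException e) {
            e.printStackTrace(); // debugging aid only; never break rendering
        }
    }
}

A call such as ImageDump.dump(tileImage, "after-compose") at each suspect stage then shows exactly where the alpha channel disappears.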
What would you do?

You might compare your approach to the one shown here with regard to clearing the buffer:
// Erase to fully transparent pixels before repainting the layer:
g2d.setComposite(AlphaComposite.Clear);
g2d.fillRect(0, 0, w, h);
g2d.setComposite(AlphaComposite.SrcOver); // then restore the default composite
In the worst case, you can break at a point at which your image is accessible and set a watch on the expression image.getRGB(0, 0) with the display set to hexadecimal. The high-order byte is the alpha value: FF is opaque, and 00..FE represents varying degrees of transparency.
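If you would rather script the check than watch it in the debugger, the bit arithmetic is just this (a sketch, not part of the original answer):

// Extract the alpha of one pixel from the packed ARGB int.
int argb = image.getRGB(0, 0);
int alpha = (argb >>> 24) & 0xFF; // 0xFF = opaque, 0x00 = fully transparent
if (alpha == 0xFF) {
    System.out.println("Pixel (0,0) is opaque; transparency was lost upstream.");
}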

Related

Can OpenGL work with non-OpenGL drawings?

Can graphics rendered using OpenGL work with graphics rendered not using OpenGL?
I am starting to learn OpenGL, but I am still shy when it comes to actually coding everything in OpenGL; I feel more comfortable drawing with JPanel or Canvas. I'm assuming that mixing them wouldn't cause much issue code-wise, but could displaying both at the same time cause problems? Or am I stuck with one or the other?
Integrating OpenGL graphics with another non-OpenGL image or rendering boils down to compositing images. You can take a 2D image and load it as a texture in OpenGL, such that you can then use that texture to paint a surface in OpenGL or, as your question suggests, paint a background. Alternatively, you can use framebuffers in OpenGL to render an OpenGL scene to a texture, which can then be converted to a 2D bitmap and combined with another image.
There are limitations to this approach of course. Once an OpenGL scene has been moved to a 2D image, generally you lose all depth (it's possible to preserve depth in an additional channel in the image if you want to do that, but it would involve additional work).
In addition, since presumably you want one image to not simply overwrite the other, you're going to have to include an alpha (transparency) channel in one of your images, so that when you combine them, areas which haven't been drawn will end up showing the underlying image.
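As a rough illustration of that compositing step in plain Java2D (the class and variable names are invented for the example):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

final class Compositor {
    // Combine a background with an overlay that carries an alpha channel.
    // Wherever the overlay was left transparent, the background shows through.
    static BufferedImage combine(BufferedImage background, BufferedImage overlay) {
        BufferedImage combined = new BufferedImage(
                background.getWidth(), background.getHeight(),
                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = combined.createGraphics();
        g.drawImage(background, 0, 0, null);
        g.drawImage(overlay, 0, 0, null); // default SrcOver composite respects alpha
        g.dispose();
        return combined;
    }
}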
However, I would suggest you undertake the effort to simply find one rendering API that serves all your needs. The extra work you do to combine rendering output from two APIs is probably going to be wasted effort in the long run. It's one thing to embed an OpenGL control into an enclosing application that renders many of its controls using a more conventional API like AWT. On the other hand, it's highly unusual to try to composite output from both OpenGL and another rendering API into the same output area.
Perhaps if you could provide a more concrete example of what kinds of rendering you're talking about, people could offer more helpful advice.
You're stuck with one or the other. You can't put them together.

I get weird black areas in a BufferedImage, how can I fix this?

I am currently writing a Fractal Explorer program, and I am encountering a really weird issue with it: I am drawing the fractal on a BufferedImage, and I get random black areas in that image. Screenshots: http://imgur.com/a/WalM7
The image is calculated in multiple threads: the big image is split into four (because I have a four-core processor) sub-images that are calculated individually. The black areas appear at the beginning of each of the sub-images. They are always rectangular, but do not necessarily follow the order in which the pixels are calculated (left to right, and the area does not always stretch to the far side of the sub-image).
I have verified that immediately after a pixel is drawn (with Graphics.drawLine), BufferedImage.getRGB returns the right color for that pixel, but after the calculation is finished it can return black instead, and black is what gets drawn on the screen.
The problem seems to vanish if I disable multi-threaded calculating (by assigning only one core to javaw.exe via the task manager) but I really don't want to have to abandon multi-core calculation. Has anyone else encountered this problem (I have not found anything via Google and stackoverflow), and do you know how to fix it?
The Graphics.drawLine call is synchronized on the Graphics object; if I additionally synchronize it on the BufferedImage, nothing changes.
If you want to see the bug for yourself, you can download the program at http://code.lucaswerkmeister.de/jfractalizer/. It is also available on GitHub (https://github.com/lucaswerkmeister/JFractalizer), but I only switched to GitHub recently, and in the first GitHub commit the problem is already apparent.
I think the problem is that neither BufferedImage nor Graphics is thread safe and that you see stale values in the thread that reads the BufferedImage after the computation.
Synchronizing on the BufferedImage like you said should actually help. But note that you must synchronize all accesses from all threads, including the read-only accesses. So my guess is that the thread that draws the BufferedImage on some component (which should be the AWT thread) does so without synchronization and therefore sees stale values.
However, I would suggest that instead of sharing a BufferedImage among multiple threads, you give each thread a separate image on which it can draw. Then, after all threads are finished, combine their work on a new image in the AWT thread.
Also, I suggest you use an ExecutorService for that if you don't do so already. It has the advantage that the visibility issues of the return values of the Callable tasks (in your case the image parts of the worker threads) are handled by the library classes.
If you combine these two approaches, you will not need to do any manual synchronization, which is always a good thing (as it's easy to get wrong).
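A minimal sketch of that arrangement, with the fractal computation itself stubbed out and the stripe layout assumed to divide evenly:

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionedRender {
    public static void main(String[] args) throws Exception {
        final int width = 800, height = 800;
        int parts = Runtime.getRuntime().availableProcessors();
        final int stripe = height / parts;
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        List<Future<BufferedImage>> futures = new ArrayList<Future<BufferedImage>>();
        for (int i = 0; i < parts; i++) {
            final int yOffset = i * stripe;
            futures.add(pool.submit(new Callable<BufferedImage>() {
                public BufferedImage call() {
                    // Each worker renders into its own private image,
                    // so no synchronization is needed while computing.
                    BufferedImage part = new BufferedImage(width, stripe,
                            BufferedImage.TYPE_INT_RGB);
                    // ... compute the fractal for rows yOffset .. yOffset + stripe - 1 ...
                    return part;
                }
            }));
        }
        // Future.get() provides the happens-before edge, so the combining
        // thread never sees stale pixel data from the workers.
        BufferedImage whole = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = whole.createGraphics();
        for (int i = 0; i < parts; i++) {
            g.drawImage(futures.get(i).get(), 0, i * stripe, null);
        }
        g.dispose();
        pool.shutdown();
    }
}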
Buffered images may not be thread safe because their data may live on the graphics card. However, this can be overridden: calling ((DataBufferInt) image.getRaster().getDataBuffer()).getData(), the "secret technique" for high-speed full-image drawing (the data buffer type depends on the image type you chose), leaves you with an unaccelerated image whose pixels you can write directly. As long as you never write to the same pixel twice, this should in theory be completely safe. But remember to join your worker threads somehow before reading pixels out of the image; Thread.join() itself is not really recommended, since it requires the thread to die.
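Sketched out, that technique looks something like this; TYPE_INT_RGB is assumed, since it backs the image with a DataBufferInt:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class DirectPixels {
    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
        // Grabbing the backing array un-accelerates the image but allows
        // direct pixel writes without going through Graphics2D; it is safe
        // across threads as long as no two threads write the same index.
        int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
        int x = 10, y = 20;
        pixels[y * image.getWidth() + x] = 0xFF0000; // packed RGB, red
    }
}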
Related note:
The flicker you are witnessing is probably an artifact of the way AWT renders to the screen. It runs in immediate mode, meaning every draw action you take immediately updates the screen. This slows down rendering of multiple objects directly to the window. You can get around the flicker by implementing a double-buffering strategy: I like to draw to an intermediate image and then draw only that image to the screen.
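In code, that strategy might look something like this minimal sketch (Swing components already double-buffer by default, so this mainly applies to raw AWT painting):

import java.awt.Canvas;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class BufferedCanvas extends Canvas {
    private BufferedImage buffer;

    @Override
    public void paint(Graphics g) {
        if (buffer == null || buffer.getWidth() != getWidth()
                || buffer.getHeight() != getHeight()) {
            buffer = new BufferedImage(getWidth(), getHeight(),
                    BufferedImage.TYPE_INT_RGB);
        }
        Graphics2D g2 = buffer.createGraphics();
        // ... draw the whole scene into the off-screen buffer here ...
        g2.dispose();
        g.drawImage(buffer, 0, 0, null); // one blit to the screen, no flicker
    }

    @Override
    public void update(Graphics g) {
        paint(g); // skip AWT's default background clear, another flicker source
    }
}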

Image caching and performance

I'm currently trying to improve the performance of a map rendering library. In the case of punctual symbols, the library very often just draws the same image again and again at each location. The drawing process may be really complex, though, because the parametrization of the symbol is very rich. For each point, I have a tree structure that computes the image about to be drawn. When the parameters are not dependent on the data I'm processing, as I said earlier, I just draw a complex symbol several times.
I've tried to implement a caching mechanism. I store the images that have already been drawn, and if I encounter a configuration that has already been met, I retrieve the image and draw it again. The first test I've made is for a very simple symbol: a circle whose outline and interior are both filled.
As I know the symbol will be constant at all locations, I cache it and then draw it from the cached image. That works... but I face two important problems:
The quality of the drawn symbols is badly damaged.
More problematic: the time needed to render the map is really higher with caching than without caching. That's pretty disappointing for a cache ^_^
The core code when the caching mechanism is on is the following:
if (pc.isCached(map)) {
    BufferedImage bi = pc.getCachedValue(map);
    drawCachedImageOnGeometry(g2, sds, fid, selected, mt, the_geom, bi);
} else {
    BufferedImage bi = g2.getDeviceConfiguration().createCompatibleImage(200, 200);
    Graphics2D tg2 = bi.createGraphics();
    graphic.draw(tg2, map, selected, mt, AffineTransform.getTranslateInstance(100, 100));
    drawCachedImageOnGeometry(g2, sds, fid, selected, mt, the_geom, bi);
    pc.cacheSymbol(map, bi);
}
The only interesting call made in drawCachedImageOnGeometry is
g2.drawRenderedImage(bi, AffineTransform.getTranslateInstance(x-100,y-100));
I've made some attempts to use VolatileImage instances rather than BufferedImage... but that causes deeper problems (I've not been able to be sure that the image will be correctly rendered each time it is needed).
I've done some profiling too, and it appears that when using my cache, the operations that take the longest time are the rendering operations made in AWT.
That said, I guess my strategy is wrong... Consequently, my questions are:
Is there any efficient way to achieve the goal I've explained?
More precisely, would it be faster to store the AWT instructions used to draw my symbols and to replay them, translated, as needed? I assume it may be possible to retrieve the "commands" used to build the symbol... I didn't find much information about that on the web, though... If it is possible, that would both save me the computation time of the symbol (which can be really complex, as said earlier) and preserve the quality of my symbols.
Thanks in advance for all the information and resources you'll give me :-)
Agemen.
EDIT: Here are some details about the graphics that can be rendered. According to the symbology model I'm implementing, graphics can be really simple (i.e. a filled square with its outline) as well as really complex (a label whose outline and fill are both drawn with hatches, for instance, perhaps even with a halo around it). I want to use a cache because I'm sure that in most configurations I'll be able to:
differentiate the parameters that have been used to draw two different symbols of the same source that are styled with the same style.
be sure that two sources with the same parameters (location excepted) will produce the same symbol for the same style, but at two different locations (only a translation will be needed).
Because of these two points, caching seems to be a good strategy. Moreover, there may be thousands of duplicated symbols to be drawn in the same image.
You are awfully vague about what kind of operations your drawing really entails, so all I can give you are some very general pointers.
1.) Drawing a pre-rendered Image is not necessarily faster than drawing the same Image using Graphics2D operations. It depends a lot on the complexity required to draw the image. As an extreme case consider fillRect() vs. a drawImage() of an Image containing the pre-rendered rectangle (fillRect just writes the destination pixels, where drawImage also needs to copy from a source).
2.) In most cases you never want to mess with VolatileImage directly. BufferedImage takes advantage of VolatileImage automatically unless you mess with the Image DataBuffer. If you have many pre-rendered images you may also run out of accelerated video memory and that degrades image drawing performance.
3.) On-the-fly scaling/rotating etc. of a pre-rendered image can be pretty costly (depending on the platform and current graphics transformations).
4.) The 'compatible image' you create may not really be compatible with the drawing target. You obtain an image compatible with the default screen device, which may not be compatible with the actual target in a multi-monitor setup. You may get better results using the actual target component's createImage(); see the sketch after this list.
EDIT:
5.) Translating the coordinates of a rendering operation may alter the destination pixels produced. An obvious case is when the coordinates are non-integers (either in the coordinates themselves or indirectly through the AffineTransform set on the graphics). Also, antialiasing of text and possibly other primitives may be influenced slightly by coordinates (subpixel rendering comes to mind).
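Here is the sketch referenced in 4.): a minimal helper, with invented naming, that allocates the cache image from the component actually being painted:

import java.awt.Component;
import java.awt.GraphicsConfiguration;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

final class SymbolBuffers {
    // Create the cached symbol image so it is compatible with the device the
    // target component is actually displayed on, not just the default screen.
    // Note: getGraphicsConfiguration() returns null if the component is not
    // yet displayable, so call this only once the component is on screen.
    static BufferedImage create(Component target, int w, int h) {
        GraphicsConfiguration gc = target.getGraphicsConfiguration();
        return gc.createCompatibleImage(w, h, Transparency.TRANSLUCENT);
    }
}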
You could attempt an approach that differentiates between symbols that are presumably fast and those that are slow to render, rendering the fast ones directly while caching the slow ones. The main problem here is deciding which ones are fast or slow; I expect this to be non-trivial.
Also, when you say there are thousands of symbols to be rendered, I imagine most of them should be clipped away, since only a small portion of the graph fits into a Window/Frame? If that's the case, don't bother much with caching. Drawing operations that are completely outside the current clip bounds will be relatively cheap - all the graphics target really does for them is detect that they are completely invisible and, when they are, do nothing. If the goal is to produce an image to be saved to disk or printed, I wouldn't bother much with speeding up the rendering, since this is a relatively rare operation and the actual printing may far exceed the time needed for rendering the graph anyway.
If none of the above applies to your case, be somewhat careful that your cache does not use more time/memory to decide whether a cached version exists than it really saves in rendering time. You also need to take into account that building a cached image instead of rendering to the target directly costs you some time if that image is never reused. Caching can only gain you some speed if the image is reused at least once, preferably many more times.
If you build your symbols by combining primitive rendering operation objects (say there are Rectangle, Halo and Text rendering subclasses), you may want to assign each of them a cost indicator and only cache those symbols that exceed some (to be determined) cost threshold. Also, it may be a good idea to implement a hashCode() for each primitive operation and for the symbol itself, for fast(er) equality detection.
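That idea might be sketched as follows; RenderOp, its cost() method, and the threshold value are all invented for the illustration:

import java.util.List;

// Hypothetical primitive-operation type: each op estimates its own
// rendering cost; implementations would also override equals()/hashCode()
// so whole symbols can be compared quickly for cache lookups.
interface RenderOp {
    int cost();
}

final class CachePolicy {
    private static final int COST_THRESHOLD = 50; // tune empirically

    // Only symbols that are expensive to draw earn a cache slot;
    // cheap ones are faster to redraw than to look up and blit.
    static boolean shouldCache(List<RenderOp> ops) {
        int total = 0;
        for (RenderOp op : ops) {
            total += op.cost();
        }
        return total > COST_THRESHOLD;
    }
}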

Printing in Java - Printable.print() resizes images

I have a custom report which draws via Graphics2D, and uses a lot of tiny BufferedImage sprites. PrinterJob.print() seems to be calling Printable.print() roughly once for each sprite (the actual count can vary both ways), so some pages are re-rendered 150 times... This causes printing to be unacceptably slow, about 10 seconds for two pages.
I found this: Why does the java Printable's print method get called multiple times with the same page number?
But it doesn't appear to explain my particular problem (or only partially explains it). I created a test report with only a few sprites, and saw a small number of re-renders that went up and down as I added and removed images along either the vertical or horizontal axis.
When printing to a PDF using Bullzip, I noticed that after zooming in on the images, they are being scaled up using a bilinear or bicubic algorithm. One of these images, which is unique in having an indexed color palette, does not appear to be scaled. I confirmed that the scaling is a Java behavior and not being performed by Bullzip by printing to a real printer and observing the same images being scaled versus not.
So it strikes me that the print API is trying to rescale images to whatever DPI it has in mind, but for some reason it calls Printable.print() again each time it encounters an image that it deems as needing this treatment.
How do I fix this behavior? I tried setting rendering hints on the Graphics2D that I get when Printable.print() is called, to no avail. I don't know what else to do short of try to find and examine the print API's source code.
I think I just figured it out by accident. A report I just modified now draws an image over some geometry, and I noticed that the part of the geometry that's behind the box of the image is being rasterized and looks blurry compared to outside of the box. The image in question (and all other than the one indexed color image) has an 8 bit alpha channel.
I noticed before that Java's print rasterizer doesn't like things with translucency (one report which used it was being completely rasterized at I think 300dpi...), but I forgot that these images also had alpha channels.
When I get a chance, I'm probably going to fix this by further increasing the images' resolution and using 1-bit alpha. When scaled down for screen viewing, the result will have a few bits of alpha again and look okay.
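Converting a sprite to 1-bit (bitmask) alpha can be sketched in plain Java2D; the helper name is invented, and the thresholding behavior comes from asking for Transparency.BITMASK:

import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

final class BitmaskAlpha {
    // Redraw a translucent sprite into a BITMASK image: every pixel becomes
    // either fully opaque or fully transparent, which should keep the print
    // pipeline from rasterizing pages that contain the sprite.
    static BufferedImage convert(BufferedImage src) {
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        BufferedImage dst = gc.createCompatibleImage(
                src.getWidth(), src.getHeight(), Transparency.BITMASK);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.dispose();
        return dst;
    }
}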

Appending to an Image File

I have written a program that takes a 'photo' and, for every pixel, chooses to insert an image from a range of other photos. The image chosen is the photo whose average colour is closest to the original pixel from the photograph.
I have done this by first averaging the RGB values of every pixel in each 'stock' image and then converting to CIE LAB, so I could calculate how 'close' it is to the pixel in question in terms of human perception of colour.
I have then compiled an image where each pixel in the original 'photo' image has been replaced with the 'closest' stock image.
It works nicely and the effect is good; however, the stock images are 300 by 300 pixels, and even with the virtual machine flags "-Xms2048m -Xmx2048m" (which yes, I know is ridiculous), on a 555 px by 540 px photo I can only draw the stock images scaled down to 50 px before I get an out-of-memory error.
So basically I am trying to think of solutions. Firstly, I think the image effect itself may be improved by averaging every four pixels (each 2x2 square) of the original image into a single pixel and then replacing that pixel with an image; that way the small photos will be more visible in the final print. This should also allow me to draw the stock images at a greater size. Does anyone have any experience with this sort of image manipulation? If so, what tricks have you discovered to produce a nice image?
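The 2x2 averaging step described above is simple enough to sketch (this is just an illustration, assuming even image dimensions and TYPE_INT_RGB):

import java.awt.image.BufferedImage;

final class Downsample {
    // Average each 2x2 block of the source into one pixel of the result.
    static BufferedImage halve(BufferedImage src) {
        int w = src.getWidth() / 2, h = src.getHeight() / 2;
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int r = 0, g = 0, b = 0;
                for (int dy = 0; dy < 2; dy++) {
                    for (int dx = 0; dx < 2; dx++) {
                        int rgb = src.getRGB(2 * x + dx, 2 * y + dy);
                        r += (rgb >> 16) & 0xFF;
                        g += (rgb >> 8) & 0xFF;
                        b += rgb & 0xFF;
                    }
                }
                // Recombine the four-pixel averages into one packed RGB int.
                dst.setRGB(x, y, ((r / 4) << 16) | ((g / 4) << 8) | (b / 4));
            }
        }
        return dst;
    }
}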
Ultimately, I think the way to avoid the memory errors would be to repeatedly save the image to disk, appending the next row of images to the file while continually removing the already-rendered images from memory. How can this be done? Is it similar to appending to a normal file?
Any help in this last matter would be greatly appreciated.
Thanks,
Alex
I suggest looking into the Java Advanced Imaging (JAI) API. You're probably using BufferedImage right now, which does keep everything in memory: source images as well as output images. This is known as "immediate mode" processing. When you call a method to resize the image, it happens immediately. As a result, you're still keeping the stock images in memory.
With JAI, there are two benefits you can take advantage of.
Deferred mode processing.
Tile computation.
Deferred mode means that the output images are not computed right when you call methods on the images. Instead, a call to resize an image creates a small "operator" object that can do the resizing later. This lets you construct chains, trees, or pipelines of operations. So, your work would build a tree of operations like "crop, resize, composite" for each stock image. The nice part is that the operations are just command objects so you aren't consuming all the memory while you build up your commands.
This API is pull-based. It defers computation until some output action pulls pixels from the operators, which saves time and memory by avoiding needless pixel operations.
For example, suppose you need an output image that is 2048x2048 pixels, scaled up from a 512x512 crop out of a source image that's 1600x512 pixels. Obviously, it doesn't make sense to scale up the entire 1600x512 source image just to throw away two thirds of the pixels. Instead, the scaling operator will have a "region of interest" (ROI) based on its output dimensions. The scaling operator projects the ROI onto the source image and only computes those pixels.
The commands must eventually get evaluated. This happens in a few situations, mostly relating to output of the final image. So, asking for a BufferedImage to display the output on the screen will force all the commands to evaluate. Similarly, writing the output image to disk will force evaluation.
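A minimal sketch of such a deferred chain using the classic JAI operator names (the file paths are placeholders, and exact parameters may vary by JAI version):

import java.awt.image.renderable.ParameterBlock;
import javax.media.jai.Interpolation;
import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;

public class DeferredChain {
    public static void main(String[] args) {
        // Build a chain of deferred operators: nothing is computed yet.
        RenderedOp source = JAI.create("fileload", "stock.jpg");

        ParameterBlock pb = new ParameterBlock();
        pb.addSource(source);
        pb.add(0.25f); // x scale
        pb.add(0.25f); // y scale
        pb.add(0.0f);  // x translation
        pb.add(0.0f);  // y translation
        pb.add(Interpolation.getInstance(Interpolation.INTERP_BILINEAR));
        RenderedOp scaled = JAI.create("scale", pb);

        // Only this sink pulls pixels through the chain, forcing evaluation.
        JAI.create("filestore", scaled, "out.png", "PNG");
    }
}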
In some cases, you can keep the second benefit of JAI, which is tile based rendering. Whereas BufferedImage does all its work right away, across all pixels, tile rendering just operates on rectangular sections of the image at a time.
Using the example from before, the 2048x2048 output image will get broken into tiles. Suppose these are 256x256; then the entire image gets broken into 64 tiles. The JAI operator objects know how to work one tile at a time. So, scaling the 512x512 section of the source image really happens 64 times, on 64x64 source pixels at a time.
Computing a tile at a time means looping across the tiles, which would seem to take more time. However, two things work in your favor when doing tile computation. First, tiles can be evaluated on multiple threads concurrently. Second, the transient memory usage is much, much lower than immediate mode computation.
All of which is a long-winded explanation for why you want to use JAI for this type of image processing.
A couple of notes and caveats:
You can defeat tile based rendering without realizing it. Anywhere you've got a BufferedImage in the workstream, it cannot act as a tile source or sink.
If you render to disk using the JAI or JAI Image I/O operators for JPEG, then you're in good shape. If you try to use the JDK's built-in image classes, you'll need all the memory. (Basically, avoid mixing the two types of image manipulation. Immediate mode and deferred mode don't mix well.)
All the fancy stuff with ROIs, tiles, and deferred mode is transparent to the program. You just make API calls on the JAI class. You only deal with the machinery if you need more control over things like tile sizes, caching, and concurrency.
Here's a suggestion that might be useful:
Try segregating the two main tasks into individual programs. Your first task is to decide which images go where, and that can be a simple mapping from coordinates to filenames, which can be represented as lines of text:
0,0,image123.jpg
0,1,image542.jpg
.....
After that task is done (and it sounds like you have it well handled), then you can have a separate program handle the compilation.
This compilation could be done by appending to an image, but you probably don't want to mess around with file formats yourself. It's better to let your programming environment do it by using a Java Image object of some sort. With a 2 GB heap and 4 bytes per pixel, the biggest image you can hold is roughly 5 x 10^8 pixels, i.e. a square image a bit over 22,000 pixels on a side. Dividing that by the number of images you need along each axis gives you the pixels allowed per subimage, and you can paint them into the appropriate places.
Every time you 'append', are you perhaps implicitly creating a new object with one more pixel to replace the old one (a parallel to the classic problem of repeatedly appending to a String instead of using a StringBuilder)?
If you post the portion of your code that does the storing and appending, someone will probably help you find an efficient way of recoding it.
