Fastest Way to Draw a Static Image in Java

I'm in the process of writing a custom heatmap generator. I'm wondering what the fastest way is to draw boxes (up to around 1 million) in Java. Most questions I've found have concentrated on dynamic images (like in games), and I'm wondering if there's a better way to go for static images. I've tried using Swing (via a GridLayout, adding a colored canvas to each box), drawing directly on the panel with Graphics2D, and also using the Processing libraries. While Processing is pretty fast and generates a clean image, the window seems to have trouble retaining it; it redraws different parts of the image whenever you minimize or move the window, etc.
I've heard of OpenGL, but I've never touched it, and I wanted some feedback as to whether that (or something else) would be a better approach before investing time in it.

For static images, I paint them to a BufferedImage (BI) and then draw that via Graphics2D.
I keep a boolean that tells me whether the BI is up to date. That way I only incur the expensive painting cost once. If you want to get fancy, you can scale the BI to handle minor resizing. For a major resizing you'll probably want to repaint the BI so as not to introduce artifacts. It's also useful for overlaying data (such as cross hairs, the value under the cursor, etc) as you're only painting the BI and the data.
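A minimal sketch of that pattern in Swing (drawHeatmap() is a hypothetical stand-in for the expensive per-box painting, not something from the question):

import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

class HeatmapPanel extends JPanel {
    private BufferedImage cache;   // the pre-rendered static image
    private boolean dirty = true;  // does the cache need repainting?

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        // repaint the cache only when marked dirty or on a real resize
        if (dirty || cache == null
                || cache.getWidth() != getWidth() || cache.getHeight() != getHeight()) {
            cache = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
            Graphics2D g2 = cache.createGraphics();
            drawHeatmap(g2);       // expensive: runs once per invalidation
            g2.dispose();
            dirty = false;
        }
        g.drawImage(cache, 0, 0, null); // cheap: runs on every repaint
        // overlays (crosshairs, hover values, ...) would be painted here, on top
    }

    void invalidateCache() { dirty = true; repaint(); }

    private void drawHeatmap(Graphics2D g2) {
        // placeholder for the real per-box drawing
        g2.setColor(Color.DARK_GRAY);
        g2.fillRect(0, 0, getWidth(), getHeight());
    }
}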

Related

Confused with image scaling and positioning in libgdx

I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
1. What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.
2. In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size from Gdx.graphics. A few examples of things I might want to do could be as follows:
- X is my "global map" that is bigger than my screen size. I want to be able to scroll/pan across it. What are the coordinates/positions I should use when displaying it?
- Y is bigger than my screen size. I want to scale it down and have it always be in the center of the screen/display. What scaling factor do I use here, and which coordinates/positions?
- Z is smaller than my screen size. I want to stick it in the upper-left corner of my screen and have it "stick" to the global map I mentioned earlier. Which positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of that second part is just which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of issuing a separate, expensive draw call for each sprite, the Batch collects many sprites and submits them together.
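A rough sketch of that chain (the atlas file “game.atlas” and the region name “hero” are made-up assets for illustration):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;

public class PipelineSketch extends ApplicationAdapter {
    private TextureAtlas atlas;
    private Sprite hero;
    private SpriteBatch batch;

    @Override
    public void create() {
        atlas = new TextureAtlas(Gdx.files.internal("game.atlas")); // one big packed texture
        hero = new Sprite(atlas.findRegion("hero"));                // a region of it
        hero.setPosition(100, 100);                                 // sprites carry position
        batch = new SpriteBatch();
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();      // group all draw calls between begin/end
        hero.draw(batch);
        batch.end();
    }

    @Override
    public void dispose() {
        atlas.dispose();    // also disposes the backing texture
        batch.dispose();
    }
}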
And hopefully I’m not adding to the confusion, but another option I really like is using the “Actor” and “Stage” classes over the “Sprite” and “SpriteBatch” classes. Actor is similar to Sprite but adds additional functionality for moving/animating via the act method. The Stage replaces the SpriteBatch: it uses its own internal SpriteBatch, so you do not need to use a SpriteBatch explicitly.
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc) which are all based off of Actor and work with the Stage.
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to reply as a comment, so I’m responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful, as you can still animate and move with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The stage has two methods called “act” and “draw” which allow the stage to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to do. Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use it.
Also, I was slightly incorrect before that “Actor” is equivalent to Sprite, because Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called “Image” that includes a Drawable, so the Image class is actually the equivalent to Sprite. Actor is the base class that provides the methods for acting (or “updating”), but it doesn’t have to be graphical. I've used Actors for other purposes such as triggering audio sounds at specific times.
Atlas creates the large Texture containing all of your PNG files and then allows you to get regions from it for individual PNGs. So the pipeline for getting a specific PNG graphic would be Atlas > Region > Sprite/Image. Both the Image and Sprite classes have constructors that take a region.
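For comparison, a minimal sketch of the Stage/Image route, under the same hypothetical atlas and region names as before:

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.ui.Image;

public class StageSketch extends ApplicationAdapter {
    private TextureAtlas atlas;
    private Stage stage;

    @Override
    public void create() {
        atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
        Image hero = new Image(atlas.findRegion("hero")); // Image = Actor with a Drawable
        hero.setPosition(100, 100);
        stage = new Stage();       // owns its own internal SpriteBatch
        stage.addActor(hero);
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        stage.act(Gdx.graphics.getDeltaTime()); // updates every actor
        stage.draw();                           // draws every actor
    }

    @Override
    public void dispose() {
        stage.dispose();
        atlas.dispose();
    }
}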

OpenGL work with non-OpenGL drawings?

Can graphics rendered using OpenGL work with graphics rendered not using OpenGL?
I am starting to learn OpenGL, but I am still shy when it comes to actually coding everything in OpenGL; I feel more comfortable drawing things out with JPanel or Canvas. I'm assuming that it wouldn't cause much issue code-wise, but could displaying it all at the same time cause issues? Or am I stuck with one or the other?
Integrating OpenGL graphics with another non-OpenGL image or rendering boils down to compositing images. You can take a 2D image and load it as a texture in OpenGL, such that you can then use that texture to paint a surface in OpenGL, or as is suggested by your question, paint a background. Alternatively, you can use framebuffers in OpenGL to render an OpenGL scene to a texture, which can then be converted to a 2D bitmap and combined with another image.
There are limitations to this approach of course. Once an OpenGL scene has been moved to a 2D image, generally you lose all depth (it's possible to preserve depth in an additional channel in the image if you want to do that, but it would involve additional work).
In addition, since presumably you want one image to not simply overwrite the other, you're going to have to include an alpha (transparency) channel in one of your images, so that when you combine them, areas which haven't been drawn will end up showing the underlying image.
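The combination step itself is ordinary image compositing. A minimal Java2D sketch, where the ARGB overlay stands in for an OpenGL scene already read back into pixels (the readback itself is omitted):

import java.awt.*;
import java.awt.image.BufferedImage;

public class CompositeSketch {
    public static void main(String[] args) {
        BufferedImage background = new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);
        BufferedImage overlay = new BufferedImage(200, 200, BufferedImage.TYPE_INT_ARGB);

        Graphics2D bg = background.createGraphics();
        bg.setColor(Color.LIGHT_GRAY);           // stand-in for the non-OpenGL rendering
        bg.fillRect(0, 0, 200, 200);

        Graphics2D ov = overlay.createGraphics();
        ov.setColor(new Color(255, 0, 0, 160));  // semi-transparent "scene"
        ov.fillOval(50, 50, 100, 100);
        ov.dispose();

        bg.drawImage(overlay, 0, 0, null);       // alpha-blends; undrawn areas show background
        bg.dispose();
    }
}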
However, I would suggest you undertake the effort to simply find one rendering API that serves all your needs. The extra work you do to combine rendering output from two APIs is probably going to be wasted effort in the long run. It's one thing to embed an OpenGL control into an enclosing application that renders many of its controls using a more conventional API like AWT. On the other hand, it's highly unusual to try to composite output from both OpenGL and another rendering API into the same output area.
Perhaps if you could provide a more concrete example of what kinds of rendering you're talking about, people could offer more helpful advice.
You're stuck with one or the other. You can't put them together.

Java image/map object

Using Java and SWT, I am trying to display a map (provided as an image) and mark points on it. My first idea is to use a canvas, draw the image (scaled to the largest possible size for the canvas) and then draw the markings (fixed size) at the scaled coordinates. However, I would also like to zoom in and move the image, and would prefer not to develop all of that functionality from scratch. So far I have not had much luck finding an existing solution, though I would guess there should be something out there.
The criteria would be:
based on SWT (or compatible)
allows exchange of the image (possibly with different sizes)
handles user interaction (selecting a point on the image, zooming in/out of the image)
Does anybody know a standard/common solution?
Depending on how complex your system may potentially grow, GEF may be an option.
The whole rendering is done on an SWT Canvas, and it provides zooming/scrolling/moving out of the box.
Downside: it's dependent on RCP, so it is likely to be an option only if you need to build an otherwise complex application; in all other cases GEF will be quite heavyweight to set up.
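For reference, the baseline the question describes (scaled image plus fixed-size markers on a canvas) might look like this minimal SWT sketch; "map.png" and the marker coordinates are hypothetical:

import org.eclipse.swt.SWT;
import org.eclipse.swt.graphics.Image;
import org.eclipse.swt.graphics.Rectangle;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.widgets.Canvas;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

public class MapCanvasSketch {
    public static void main(String[] args) {
        Display display = new Display();
        Shell shell = new Shell(display);
        shell.setLayout(new FillLayout());
        Image map = new Image(display, "map.png");

        Canvas canvas = new Canvas(shell, SWT.DOUBLE_BUFFERED);
        canvas.addPaintListener(e -> {
            Rectangle src = map.getBounds();
            Rectangle dst = canvas.getClientArea();
            // scale the map to fill the canvas (aspect-ratio handling omitted)
            e.gc.drawImage(map, 0, 0, src.width, src.height, 0, 0, dst.width, dst.height);
            // a marker at image coordinates (400, 300), translated to canvas coordinates
            int mx = 400 * dst.width / src.width;
            int my = 300 * dst.height / src.height;
            e.gc.setBackground(display.getSystemColor(SWT.COLOR_RED));
            e.gc.fillOval(mx - 4, my - 4, 8, 8);  // fixed-size marker
        });

        shell.setSize(800, 600);
        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch()) display.sleep();
        }
        map.dispose();
        display.dispose();
    }
}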

Java VLCJ canvas and drawing on top with transparent background

I have spent a bit of time researching whether it is possible to draw on top of a VLCJ movie within a Java application. I have found a few bits of conflicting advice, some saying it is not possible and some referencing articles that have since been moved on oracle.com.
Can someone clarify if it is or is not possible to draw java2d graphics like rectangles/lines which also have transparent backgrounds so the video stream underneath can be viewed whilst the shapes are present on screen?
If this is not possible with vlcj, what would be a good alternative for a Linux- and Windows-compatible media player allowing annotation over a playing video stream? Please note I do not have to be limited to Java, but something that lets me reuse the drawing routines I develop across multiple platforms would be ideal.
Yes, you can do it. For the normal hardware-rendered video player, you need at least Java 6u10 (preferably 7) and achieve this by overlaying a transparent JWindow on top of the VLC canvas (it's not too hard to add events to the canvas to check for updates in position/size and then move the overlaid window correspondingly).
The other way, which doesn't involve overlaid windows, is to use a DirectMediaPlayer, where you have access to the framebuffer directly (and can therefore do what you like with the pixels, including wrapping them as textures around 3D objects and so on). With this approach, you could simply draw what you wanted onto the frame buffer before rendering it to screen however you chose. This is the most flexible approach, but it comes with the downside that if you're not very careful about your implementation, you lose all the GPU acceleration and end up crippling the CPU, especially for HD video.
If a simple overlay would do the trick, I'd try that first, and just resort to a DirectMediaPlayer if you have to.
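A minimal sketch of the transparent-overlay idea in plain Swing (per-pixel translucency, Java 7+, assumed to be supported by the platform; the vlcj wiring that keeps the window aligned with the video canvas is as described above and omitted here):

import java.awt.*;
import javax.swing.*;

public class OverlaySketch {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JWindow overlay = new JWindow();
            overlay.setBackground(new Color(0, 0, 0, 0)); // fully transparent background
            overlay.setContentPane(new JComponent() {
                @Override
                protected void paintComponent(Graphics g) {
                    // annotations drawn here appear over whatever is underneath
                    g.setColor(Color.RED);
                    g.drawRect(20, 20, 100, 60);
                }
            });
            overlay.setSize(300, 200);
            overlay.setAlwaysOnTop(true);
            overlay.setVisible(true);
        });
    }
}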

Image caching and performance

I'm currently trying to improve the performance of a map rendering library. In the case of punctual symbols, the library very often just draws the same image again and again at each location. The drawing process may be really complex, though, because the parametrization of the symbol is very rich. For each point, I have a tree structure that computes the image about to be drawn. When the parameters are not dependent on the data I'm processing, as I said earlier, I just draw a complex symbol several times.
I've tried to implement a caching mechanism. I store the images that have already been drawn, and if I encounter a configuration that has already been met, I get the image and draw it again. The first test I've made is for a very simple symbol: a circle with both its outline and its interior filled.
As I know the symbol will be constant at all locations, I cache it and then draw it again from the cached image. That works... but I face two important problems:
The quality of the drawn symbols is badly degraded.
More problematic: the time needed to render the map is really higher with caching than without caching. That's pretty disappointing for a cache ^_^
The core code when the caching mechanism is on is the following :
if (pc.isCached(map)) {
    // cache hit: reuse the symbol rendered earlier
    BufferedImage bi = pc.getCachedValue(map);
    drawCachedImageOnGeometry(g2, sds, fid, selected, mt, the_geom, bi);
} else {
    // cache miss: render the symbol once into an offscreen image,
    // draw it, then store it for later reuse
    BufferedImage bi = g2.getDeviceConfiguration().createCompatibleImage(200, 200);
    Graphics2D tg2 = bi.createGraphics();
    graphic.draw(tg2, map, selected, mt, AffineTransform.getTranslateInstance(100, 100));
    drawCachedImageOnGeometry(g2, sds, fid, selected, mt, the_geom, bi);
    pc.cacheSymbol(map, bi);
}
The only interesting call made in drawCachedImageOnGeometry is
g2.drawRenderedImage(bi, AffineTransform.getTranslateInstance(x - 100, y - 100));
I've made some attempts to use VolatileImage instances rather than BufferedImage... but that causes deeper problems (I've not been able to ensure that the image will be correctly rendered each time it is needed).
I've done some profiling too, and it appears that when using my cache, the operations that take the longest time are the rendering operations made in AWT.
That said, I guess my strategy is wrong... Consequently, my questions are :
Is there any efficient way to achieve the goal I've explained?
More precisely, would it be faster to store the AWT instructions used to draw my symbols and translate them as needed? I make the assumption that it may be possible to retrieve the "commands" used to build the symbol... I didn't find much information about that on the web, though... If it is possible, that would both save me the computation time of the symbol (which can be really complex, as said earlier) and preserve the quality of my symbols.
Thanks in advance for all the information and resources you'll give me :-)
Agemen.
EDIT: Here are some details about the graphics that can be rendered. According to the symbology model I'm implementing, graphics can be really simple (i.e. a filled square with an outline) as well as really complex (a label whose outline and fill are both drawn with hatches, for instance, and even a halo around it if I want). I want to use a cache because I'm sure that in most configurations I'll be able to:
differentiate the parameters that have been used to draw two different symbols from the same source styled with the same style.
be sure that two sources with the same parameters (location excepted) will produce the same symbol for the same style, but at two different locations (only a translation will be needed).
Because of these two points, caching seems to be a good strategy. Moreover, there may be thousands of duplicated symbols to be drawn in the same image.
You are awfully vague about what kind of operations your drawing really entails, so all I can give you are some very general pointers.
1.) Drawing a pre-rendered image is not necessarily faster than drawing the same image using Graphics2D operations. It depends a lot on the complexity required to draw the image. As an extreme case, consider fillRect() vs. a drawImage() of an Image containing the pre-rendered rectangle (fillRect just writes the destination pixels, whereas drawImage also needs to copy from a source).
2.) In most cases you never want to mess with VolatileImage directly. BufferedImage takes advantage of VolatileImage automatically unless you mess with the Image DataBuffer. If you have many pre-rendered images you may also run out of accelerated video memory and that degrades image drawing performance.
3.) On-the-fly scaling/rotating etc. of a pre-rendered image can be pretty costly (depending on the platform and current graphics transformations).
4.) The 'compatible image' you create may not really be compatible with the drawing target. You obtain an image compatible with the default screen device, which may not be compatible with the actual target in a multi-monitor setup. You may get better results using the actual target component's createImage().
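For example (a sketch; myComponent is a placeholder for whatever you are actually painting into):

GraphicsConfiguration gc = myComponent.getGraphicsConfiguration(); // null if not displayable yet
BufferedImage bi = (gc != null)
        ? gc.createCompatibleImage(200, 200)
        : new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);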
EDIT:
5.) Translating the coordinates of a rendering operation may alter the destination pixels produced. An obvious case is when the coordinates are non-integers (either in the coordinates themselves or indirectly through the AffineTransform set on the graphics). Also, antialiasing of text and possibly other primitives may be influenced slightly by coordinates (subpixel rendering comes to mind).
You could attempt an approach that differentiates between symbols that are presumably fast and those that are slow to render, rendering the fast ones directly while caching the slow ones. The main problem here is deciding which ones are fast or slow; I expect this to be non-trivial.
Also, when you say there are thousands of symbols to be rendered, I imagine most of them should be clipped away, since only a small portion of the graph fits into a Window/Frame? If that's the case, don't bother much with caching. Drawing operations that are completely outside the current clip bounds will be relatively cheap; all the graphics target really does for them is detect that they are completely invisible and then do nothing. If the goal is to produce an image to be saved to disk or printed, I wouldn't bother much with speeding up the rendering, since that is a relatively rare operation and the actual printing may by far exceed the time needed for rendering the graph anyway.
If none of the above applies to your case, be somewhat careful that your cache does not use more time/memory to decide whether a cached version exists than it really saves in rendering time. You also need to take into account that building a cached image instead of rendering to the target directly costs you some time if that image is never reused. Caching can only gain you speed if the image is reused at least once, preferably many more times.
If you build your symbols by combining primitive rendering operation objects (e.g. Rectangle, Halo, and Text rendering subclasses), you may want to assign each of them a cost indicator and only cache those symbols that exceed some (to be determined) cost threshold. It may also be a good idea to implement hashCode() for each primitive operation and for the symbol itself, for fast(er) equality detection.
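A hypothetical shape for that idea; all the names and the threshold value here are made up:

import java.awt.Graphics2D;
import java.util.List;

interface RenderOp {
    int cost();               // rough relative cost of this primitive
    void draw(Graphics2D g2);
}

final class Symbol {
    private static final int COST_THRESHOLD = 50; // to be determined empirically
    private final List<RenderOp> ops;

    Symbol(List<RenderOp> ops) { this.ops = ops; }

    boolean worthCaching() {
        int total = 0;
        for (RenderOp op : ops) total += op.cost();
        return total > COST_THRESHOLD; // cache only expensive symbols
    }

    // cheap hashCode/equals so cache lookups stay fast
    @Override public int hashCode() { return ops.hashCode(); }
    @Override public boolean equals(Object o) {
        return o instanceof Symbol && ops.equals(((Symbol) o).ops);
    }
}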
