Create texture directly from VolatileImage using JOGL - java

I'm creating a Java implementation of http://alteredqualia.com/visualization/evolve/ as a hobby project. I'm using hardware-accelerated Graphics2D to draw the polygons on a VolatileImage; I then want to create a texture from the VolatileImage so I can use glReadPixels to compare the generated image to the original (which is also a texture).
I spent the last 2 hours sifting through various Texture documentations, but there doesn't seem to be an easy way to create a texture out of a VolatileImage. Did I miss something here, or is this just not possible? I know you can convert the VolatileImage to a BufferedImage and then create the Texture, but this method is very slow, which is a bad thing considering performance is crucial for this program.

There is no direct way, because a VolatileImage has no API for obtaining the image data except by making a copy via getSnapshot().
In practice, simply use a BufferedImage from the start: there is some magic under the hood of BufferedImage that will make use of hardware acceleration where possible. One thing you must avoid is obtaining a reference to a BufferedImage's DataBuffer, as that may break the acceleration.
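For completeness, the copy route looks like this; a minimal sketch assuming JOGL 2's AWTTextureIO utility (and it is exactly the snapshot copy the question hoped to avoid):
import com.jogamp.opengl.GLProfile;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.awt.AWTTextureIO;
import java.awt.image.BufferedImage;
// must run on a thread with a current GL context
BufferedImage snapshot = volatileImage.getSnapshot(); // the unavoidable copy
Texture texture = AWTTextureIO.newTexture(GLProfile.getDefault(), snapshot, false);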

Related

OpenGL work with non-OpenGL drawings?

Can graphics rendered using OpenGL work with graphics rendered not using OpenGL?
I am starting to learn OpenGL, but I am still shy when it comes to actually coding everything in OpenGL; I feel more comfortable drawing things out with a JPanel or Canvas. I'm assuming that it wouldn't cause much issue code-wise, but could displaying it all at the same time cause issues? Or am I stuck with one or the other?
Integrating OpenGL graphics with another non-OpenGL image or rendering boils down to compositing images. You can take a 2D image and load it as a texture in OpenGL, such that you can then use that texture to paint a surface in OpenGL, or, as is suggested by your question, paint a background. Alternatively, you can use framebuffers in OpenGL to render an OpenGL scene to a texture, which can then be converted to a 2D bitmap and combined with another image.
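The render-to-texture direction looks roughly like this; a minimal sketch assuming LWJGL's GL bindings, with fbo, tex, width and height as hypothetical names:
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL30;
import java.nio.ByteBuffer;
int tex = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, tex);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, width, height, 0,
GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null); // allocate storage, no data yet
int fbo = GL30.glGenFramebuffers();
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fbo);
GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0,
GL11.GL_TEXTURE_2D, tex, 0);
// ... draw the OpenGL scene here; it now lands in 'tex' ...
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0); // back to the default framebuffer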
There are limitations to this approach of course. Once an OpenGL scene has been moved to a 2D image, generally you lose all depth (it's possible to preserve depth in an additional channel in the image if you want to do that, but it would involve additional work).
In addition, since presumably you want one image to not simply overwrite the other, you're going to have to include an alpha (transparency) channel in one of your images, so that when you combine them, areas which haven't been drawn will end up showing the underlying image.
However, I would suggest you undertake the effort to simply find one rendering API that serves all your needs. The extra work you do to combine rendering output from two APIs is probably going to be wasted effort in the long run. It's one thing to embed an OpenGL control into an enclosing application that renders many of its controls using a more conventional API like AWT. On the other hand, it's highly unusual to try to composite output from both OpenGL and another rendering API into the same output area.
Perhaps if you could provide a more concrete example of what kinds of rendering you're talking about, people could offer more helpful advice.
You're stuck with one or the other. You can't put them together.

Pixels Array Too CPU Intensive

I have been working on a Java 2D game for a little while. It is a raster graphics system with an array of pixels (integers).
// back buffer the game renders into
private BufferedImage image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
// direct view of the image's packed-RGB pixel data (one int per pixel)
private int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
I then create an array of integers separately, manipulate it with the screen objects (rendered from external image files, such as .png), then copy that array into my main one, which is projected on the screen. I found no performance difference (and really didn't expect to) in using array copying methods over iteration. Regardless, the graphics render well and the game is coming along swimmingly.
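(For reference, the bulk copy is a one-liner; buffer is the hypothetical off-screen array being copied into the pixels view shown above:)
System.arraycopy(buffer, 0, pixels, 0, pixels.length);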
However, I have found this to be extremely CPU intensive. My activity monitor says that the application is using more than 100% of my CPU. This is obviously because I am iterating through a pixel array (76k integers) each update (60 times per second).
I chose this technique for educational purposes. This is a personal project and I simply wanted to get insight into Java graphics. I am, by no means, married to this rendering technique.
My question comes in three related parts...
Obviously there is a better way to do this. What libraries/frameworks would do it better?
Will those libraries essentially do the same thing (loop through the pixels), just in a more efficient way?
Is there a way I can optimize this technique without using external tools, or is it just not worth it?
OpenGL is most often associated with 3D graphics applications, but it can also be used for 2D applications. Using an orthographic projection and textured quads to take the place of sprites and background images, you can construct the graphics and interface for a game with absolutely no 3D elements at all, despite the fact that the drawing is being done, in a sense, in 3D.
I have no experience using OpenGL in Java. However, I know that it is possible using the LWJGL library and possibly others. You should check it out if you want to drastically improve your graphics performance.
To answer your second question, OpenGL in this application actually works very differently from your approach, using the same techniques it would use to draw textured polygons in 3D rather than simple block image transfers into a framebuffer, though pixel data can be accessed and manipulated.
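To make that concrete, here is a minimal sketch of one "sprite" drawn as a textured quad under an orthographic projection, assuming LWJGL's legacy fixed-pipeline bindings; spriteTextureId, x, y, w and h are hypothetical:
import org.lwjgl.opengl.GL11;
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, 800, 600, 0, -1, 1); // y grows downward, as in Java2D
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, spriteTextureId); // texture uploaded elsewhere
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0); GL11.glVertex2f(x, y);
GL11.glTexCoord2f(1, 0); GL11.glVertex2f(x + w, y);
GL11.glTexCoord2f(1, 1); GL11.glVertex2f(x + w, y + h);
GL11.glTexCoord2f(0, 1); GL11.glVertex2f(x, y + h);
GL11.glEnd();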
Here's an example of 2D graphics using OpenGL in C, from a game project I never finished, but it may nevertheless serve as a good visual example of what I'm talking about. In this case, I did not use an orthographic projection, but rather a perspective matrix, to get parallax effects by drawing the various layers at different depths in 3D space.

Drawing textured polygons with libgdx

I'm having a problem with my rendering cycle using libgdx. Basically I need to fill an area with a square texture, and the last part of this area may be smaller or have a different shape than the texture, which means I need to render a quad of arbitrary form and slap the texture on it, cutting off the parts I don't need.
I'm a bit lost on how to do this. So far I've seen that PolygonRegion and PolygonSpriteBatch might do it for me, but I'm wary of instantiating a new heavyweight object I'll use for only one object.
Is there any alternative? Perhaps the Mesh class, but I'd like to be certain.
I suggest using a Mesh to define exactly what region you want. Defining the vertex points and mapping those to the texture coordinates is a bit fiddly, but it's good to know what's going on underneath some of the higher-level APIs (like the *Batch bits). Additionally, the *Batch APIs are designed to share the weight of uploading a single texture across multiple objects, which sounds like it might not apply in this case. (On the other hand, even if the Batch objects are a bit "heavyweight", they may not actually be a problem in practice.)
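A minimal sketch of the Mesh route, assuming a loaded Texture and an already-bound ShaderProgram with the usual a_position/a_texCoord0 attributes (all names and coordinates here are illustrative):
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes;
Mesh mesh = new Mesh(true, 4, 6,
new VertexAttribute(VertexAttributes.Usage.Position, 2, "a_position"),
new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 2, "a_texCoord0"));
// x, y in world units; u, v in [0..1] texture space - shrink u/v to cut the texture
mesh.setVertices(new float[] {
0f, 0f, 0f, 1f,
100f, 0f, 1f, 1f,
100f, 60f, 1f, 0.4f, // shorter quad: only part of the texture is used
0f, 60f, 0f, 0.4f });
mesh.setIndices(new short[] { 0, 1, 2, 2, 3, 0 });
texture.bind(); // 'texture' and 'shader' are assumed to exist
mesh.render(shader, GL20.GL_TRIANGLES);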
Another approach to consider is to render the object as a square mesh, but to define your texture with transparent pixels for all the pixels outside the region. (I'm assuming the non-square shape is something you can know offline, and isn't dynamic.)
It isn't a big problem if you instantiate a PolygonSpriteBatch for that purpose. The object mainly contains buffers for the batched geometry. Of course you will need to take care of correct rendering order, calling flush() or end() when needed.
Mesh is another option, but it can be a bit more work because you need to provide the vertices and texture coordinates manually.
From a performance point of view, rendering a single sprite is slightly faster with a Mesh. I'm not sure whether the difference affects the fps at all in your case.
EDIT: forgot to mention, if you use a SpriteBatch for rendering one object, don't use the default constructor; it reserves a lot of memory.
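(For example, assuming current libgdx defaults of roughly a thousand sprites per batch, you can size the batch to match its actual use; note the size parameter's exact meaning differs between the two classes, so check the constructor docs:)
import com.badlogic.gdx.graphics.g2d.PolygonSpriteBatch;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
SpriteBatch batch = new SpriteBatch(1); // room for exactly one sprite
PolygonSpriteBatch polyBatch = new PolygonSpriteBatch(64); // small vertex budget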

Blitting in Java

I do not know Java (I usually write in C). How can I efficiently do some kind of blitting of pixel array content onto a window in Java? I need to blit pixels[][] onto a window in a loop. I could use something like
pixels[][] -> MemoryImageSource -> Image -> drawImage
but creating and deleting a MemoryImageSource and an Image every frame seems strange to me. How can it be done simply and reasonably efficiently? Could someone give a code example? Thanks.
Normally in Java it's easier to work with the native Image types and use their derived graphics. Behind the scenes Java uses blits as well, so the higher-level abstractions are there to ease the workload.
But if there's no way to abstract on the pixel data you can use Raster and WritableRaster (where you can replace portions of the array) as an alternative to your solution. These rasters can be used with a BufferedImage which then can be drawn using the drawImage method you mentioned. I found one way of doing it here which basically creates the Image and then retrieves the raster for future manipulation.
int x = 100, y = 100; // image dimensions
BufferedImage image = new BufferedImage(x, y, BufferedImage.TYPE_INT_RGB);
WritableRaster raster = image.getRaster(); // direct access to the image's pixels
That raster (or just small areas of it) can then be manipulated and repainted.
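Continuing that snippet, a per-frame update could look like this (a sketch; component is a hypothetical Swing/AWT component whose paint code does g.drawImage(image, 0, 0, null)):
int[] pixels = new int[x * y]; // packed 0xRRGGBB, one int per pixel
java.util.Arrays.fill(pixels, 0xFF0000); // e.g. fill the frame with red
raster.setDataElements(0, 0, x, y, pixels); // bulk write into the raster
component.repaint();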
This might improve performance slightly, since the distance from your pixel array to the screen is shorter. But I think very few people fully understand the entire depths of the AWT API - and it all depends on the native implementations, of course - so my idea contains a healthy part of speculation ;-)
But I hope it helped.
For speed, you can pre-compute variations of the ColorModel, as shown in this example.
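Alternatively, the MemoryImageSource pipeline from the question can simply be kept alive across frames instead of being recreated; MemoryImageSource has an animated mode for exactly this. A sketch, with comp as a hypothetical java.awt.Component:
import java.awt.Image;
import java.awt.image.ColorModel;
import java.awt.image.MemoryImageSource;
int[] pixels = new int[w * h]; // packed 0xAARRGGBB
MemoryImageSource source =
new MemoryImageSource(w, h, ColorModel.getRGBdefault(), pixels, 0, w);
source.setAnimated(true); // allow the pixels to change over time
Image img = comp.createImage(source); // created once, reused every frame
// per frame: mutate 'pixels', then:
source.newPixels(); // pushes the updated array into 'img'
// finally draw with g.drawImage(img, 0, 0, null) in paint()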

Resizing an image using OpenGL

I'd like to resize an image using OpenGL. I'd prefer it in Java, but C++ is OK, too.
I am really new to the whole thing so here's the process in words as I see it:
load the image as a texture into OGL
set some stuff, regarding state & filtering
draw the texture in different size onto another texture
get the texture data into an array or something
Do you think it would be faster to use OpenGL and the GPU than to use a CPU-based BLIT library?
Thanks!
Instead of rendering a quad into the destination FBO, you can simply use the hardware blit functionality: glBlitFramebuffer. Its arguments are straightforward, but it requires careful preparation of your source and destination FBOs (a sketch follows the list below):
ensure the FBOs are complete (glCheckFramebufferStatus)
set the read FBO target and write FBO target independently (glBindFramebuffer)
set the draw buffers and read buffer (glDrawBuffers / glReadBuffer)
call glBlitFramebuffer, passing the GL_LINEAR filter in the last argument
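A minimal sketch of the blit itself, assuming LWJGL bindings and two already-complete FBOs (srcFbo holds the original image, dstFbo the resize target; all names and sizes are illustrative):
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL30;
GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, srcFbo);
GL30.glBindFramebuffer(GL30.GL_DRAW_FRAMEBUFFER, dstFbo);
GL30.glBlitFramebuffer(0, 0, srcWidth, srcHeight, // source rectangle
0, 0, dstWidth, dstHeight, // destination rectangle
GL11.GL_COLOR_BUFFER_BIT,
GL11.GL_LINEAR); // filtered scaling
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0); // restore the default framebuffer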
I bet it will be much faster on GPU, especially for large images.
It depends: if the images are big, it might be faster to use OpenGL. But if you're just doing the resize and no further processing on the GPU side, then it's not worth it, as it is very likely going to be slower than the CPU.
But if you need to resize the image and you can implement further processing in OpenGL, then it is a worthwhile idea.
