Slick2D Graphics Performance Issues Using Large Images - java

I am writing a 2D lunar lander-style game in Java and using the Slick2D library to handle the graphics. I am having a problem handling the background images.
Here is my problem:
I have 3 layers of details to paint on the background behind the spaceship (stars, mountains and land including landing sites). These are repainted each loop as the ship (centre of the screen) moves around.
The images for these layers are 4500 pixels wide by 1440 high. This is mainly to create some sense of variety (stars) and to be sufficiently wide to hold the generated mountains and land (the land includes the landing sites). Mountains and land are generated per turn and are polygons drawn into holding images.
Slick2D (or OpenGL) is complaining that it cannot handle images of this size - it says it can only handle textures of 512 x 512 on my development PC. So... I have been exploring different methods to work around this, including:
a. doing polygon clipping in each loop to reduce my polygons (mountains and land) to the displayable screen size (640 x 480), but this seems mathematically challenging, or
b. splitting my layer images into 512x512 tiles and then updating the screen with the tiles, which is an extension of what I already do (wrapping the layers to create an 'infinite' world) so seems more doable given my abilities.
My first question (or sense-check, really) is: am I missing something? My images, although large, are minimal in terms of content, e.g. a black background with a few lines on it. Is there a way to compress these in Slick2D/OpenGL, or have I missed something to do with settings that would let my card handle bigger images? (I'm assuming not, based on what I have read, but hope springs eternal.)
So, assuming I have not missed anything obvious, on to part 2...
As a quick "I might get away with this" workaround, I have reverted to using BufferedImages to hold the layers and then extracting portions of these into Slick2D images and painting these on the screen in each render loop. Doing it this way I am getting about 3 FPS, which is obviously a tad slow for a real-time game.
To create the BufferedImages I am using:
BufferedImage im_stars = new BufferedImage(bWIDTH, bHEIGHT, BufferedImage.TYPE_INT_ARGB);
Graphics2D gr_stars = im_stars.createGraphics();
... and then I draw my content onto them (stars, etc.)
In my render loop I do a bit of maths to work out which chunks of the images I need to display (to cope with wrapping/providing an 'infinite' experience) and then extract the relevant portions of the BufferedImage into Slick2D images as follows:
Image i1_star = Tools.getImage(stars.getStarImg().getSubimage((int) x1, (int) y1, width, height));
g.drawImage(i1_star, 0, 0);
I have written a static helper method to convert my BufferedImage chunk to a Slick2D Image as follows:
protected static Image getImage(BufferedImage bi) {
    Image im = null;
    try {
        // Upload the BufferedImage as an OpenGL texture and wrap it in a Slick2D Image
        im = new Image(BufferedImageUtil.getTexture("", bi));
    } catch (IOException ex) {
        Logger.getLogger(Tools.class.getName()).log(Level.SEVERE, null, ex);
    }
    return im;
}
I'm guessing this is a bad way to do things based on the FPS I am getting, although 3 seems very low - I was getting about 25 FPS when I was doing the same thing with rendering code I'd written myself! So, is there an accelerated Slick2D/OpenGL way to do this that I am missing, or am I back to having to either tile my background images or hold them as polygons and develop a polygon clipping routine?
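For completeness, the tiling route (option b) would look roughly like the sketch below. This is only an outline, not working code from my game: it assumes the layers stay in BufferedImages, slices each layer into 512x512 chunks once at generation time using the Tools.getImage() helper above, and then just draws the cached Slick2D tiles every frame.

import java.awt.image.BufferedImage;
import org.newdawn.slick.Image;

// Sketch: slice a large BufferedImage layer into 512x512 chunks once, upload
// each chunk as a Slick2D Image, and reuse those Images in every render loop.
class TiledLayer {
    private static final int TILE = 512;   // max texture size reported for my card
    private final Image[][] tiles;

    TiledLayer(BufferedImage layer) {
        int cols = (layer.getWidth() + TILE - 1) / TILE;
        int rows = (layer.getHeight() + TILE - 1) / TILE;
        tiles = new Image[cols][rows];
        for (int cx = 0; cx < cols; cx++) {
            for (int cy = 0; cy < rows; cy++) {
                int w = Math.min(TILE, layer.getWidth() - cx * TILE);
                int h = Math.min(TILE, layer.getHeight() - cy * TILE);
                tiles[cx][cy] = Tools.getImage(
                        layer.getSubimage(cx * TILE, cy * TILE, w, h));
            }
        }
    }

    // Draw only the tiles overlapping the 640x480 viewport, wrapping the
    // column index so the layer repeats horizontally ('infinite' world).
    void render(float camX, float camY) {
        int firstCol = (int) Math.floor(camX / TILE);
        int lastCol = (int) Math.floor((camX + 640) / TILE);
        int firstRow = Math.max(0, (int) Math.floor(camY / TILE));
        int lastRow = Math.min(tiles[0].length - 1, (int) Math.floor((camY + 480) / TILE));
        for (int cx = firstCol; cx <= lastCol; cx++) {
            for (int cy = firstRow; cy <= lastRow; cy++) {
                int col = ((cx % tiles.length) + tiles.length) % tiles.length;
                tiles[col][cy].draw(cx * TILE - camX, cy * TILE - camY);
            }
        }
    }
}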

Having done some more research, I have found that my graphics card can support up to 4096 x 4096 pixel images using Slick2D's:
BigImage.getMaxSingleImageSize();
I have reverted to using Slick2D image files with a width no larger than this size in my program and am now getting around 350 FPS, so the BufferedImage workaround was definitely a bad idea.
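A related option, sketched below: query the limit at runtime and fall back to Slick2D's BigImage (which splits an oversized file across several smaller textures internally) for anything wider than the card supports. This is only an illustration; the file path and width check are placeholders.

import org.newdawn.slick.BigImage;
import org.newdawn.slick.Image;
import org.newdawn.slick.SlickException;

// Sketch: use a plain Image when the layer fits in one texture, otherwise
// let BigImage handle the tiling transparently.
static Image loadLayer(String ref, int layerWidth) throws SlickException {
    int maxSize = BigImage.getMaxSingleImageSize();   // 4096 on my card
    if (layerWidth <= maxSize) {
        return new Image(ref);    // single texture
    }
    return new BigImage(ref);     // split across textures by Slick2D
}

In my case the layers now fit within the 4096 limit, so the plain Image path is the one that actually runs.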

Related

Real time screen recording in a LibGDX screen

In short, I'm making a simulation where I have a bunch of creatures that can see each other. The way I want to do this is to capture an area around each creature and give it to their neural network, and make them evolve to recognize their surroundings. I am coding this using LibGDX, and I don't plan on making screenshots every single frame because I can imagine that that is already a very poor idea. However, the problem is that I don't know how to get the pixels inside a defined square without capturing the entire screen and then cherry-picking what I want for each creature, which would cause a MASSIVE lag spike, since the area these creatures will be in is 2000x2000 - that's 4 million pixels, and therefore 12 million RGB values.
Each creature is about 5 pixels wide and high, so my idea is to give each one a 16x16 area around it, which is why iterating through the entire frame buffer won't work: it would pointlessly iterate through millions of values before finding the ones I asked for.
I would also need to be able to take pictures outside of the screen (as in, the part outside the window's boundaries), if that is even possible.
How can I achieve this? I'm aiming for performance, but I do not mind distributing the load between multiple frames or even multithreading.
The problem is you can't query pixels in a framebuffer.
You can capture a texture from a framebuffer, and you can convert a texture to a pixmap.
libgdx TextureRegion to Pixmap
You can then getPixel(int x, int y) against the pixmap.
However, maybe going the other way would be better.
Start with a pixmap, work with the pixmap, and for each frame convert the pixmap to a texture and render that texture fullscreen. This also removes the need for the creatures' environment to match the screen resolution (although you could still set it up like that).
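A rough sketch of that pixmap-first approach (class and method names are just illustrative):

import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

// Sketch: keep the 2000x2000 world in a Pixmap, read each creature's 16x16
// neighbourhood straight from it, and upload it as a Texture for rendering.
public class WorldView {
    private final Pixmap world = new Pixmap(2000, 2000, Pixmap.Format.RGBA8888);
    private Texture worldTexture;

    // 16x16 patch centred on (cx, cy), as packed RGBA8888 ints, for one creature.
    public int[] sample(int cx, int cy) {
        int[] patch = new int[16 * 16];
        int i = 0;
        for (int y = cy - 8; y < cy + 8; y++) {
            for (int x = cx - 8; x < cx + 8; x++) {
                patch[i++] = world.getPixel(x, y);
            }
        }
        return patch;
    }

    public void render(SpriteBatch batch) {
        if (worldTexture != null) {
            worldTexture.dispose();           // don't leak last frame's texture
        }
        worldTexture = new Texture(world);    // upload the current pixmap
        batch.begin();
        batch.draw(worldTexture, 0, 0);
        batch.end();
    }
}

Recreating the Texture every frame is the simplest thing that works; if that shows up in profiling, you can keep one Texture and refresh it with texture.draw(pixmap, 0, 0) instead.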

Libgdx - how to load a huge spritesheet so it doesn't render black?

I'm facing a performance problem with my mobile game which I write in LibGDX.
Let's assume that there is only one resolution - 1920x1080 - just for testing.
I have a spritesheet composed of frames like the ones below:
I want these particles to fly around my logo, so I need each frame to be at least 256px by 512px so it looks good at the given resolution.
If one frame has that size and I need at least 32 frames for it to look good, it's easy to calculate that the whole spritesheet has dimensions:
width: 256 * 8 (columns) = 2048
height: 512 * 4 (rows) = 2048
This is the most optimistic idea, cause the spritesheet should be even bigger.
The file size of the spritesheet is ~50 KB, so that's really fine, but the dimensions are getting me into trouble.
Yesterday I tested everything in the desktop version of LibGDX and everything rendered fine. After finishing the menu I ported the game to Android, and the area where this spritesheet should be drawn is black.
I read on gamedev or here (don't remember) that I should only use graphics with a maximum size of 1024x1024, because LibGDX has problems loading larger textures on many Android versions.
What I'm trying to accomplish:
I need to find an idea how to make this work and load the texture.
I already tried resizing the file using a Pixmap, but it takes ints as dimensions, it lowers the quality, etc.
I know someone will say: why don't you just create a single 'dot' object with an orange graphic like the one below, spawn these dots randomly and change their alpha sometimes? It's not an option, because I need other animations, like 'fog', which can't be programmed as easily as dots.
Maybe there is a way to resize a texture, a region or something (using floats of course to keep the aspect ratio)?
If someone has any ideas, how should I use huge spritesheets in my app, I would be very grateful :)
Your sprite sheet should not exceed 2048x2048.
If you want to use a spritesheet that is bigger than 2048x2048, then you have to load each frame separately.
For example, if your sprite sheet contains 15 frames, then you should have 15 separate PNG files.
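Roughly like this - a sketch only, assuming the frames are exported as particle_0.png through particle_31.png (the names and frame count are placeholders; Animation is generic in recent LibGDX versions, drop the type parameter on older ones):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Animation;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.utils.Array;

// Load each 256x512 frame as its own small texture instead of one huge sheet.
Array<TextureRegion> frames = new Array<TextureRegion>();
for (int i = 0; i < 32; i++) {
    Texture frame = new Texture(Gdx.files.internal("particle_" + i + ".png"));
    frames.add(new TextureRegion(frame));
}
Animation<TextureRegion> particle = new Animation<TextureRegion>(1 / 30f, frames);

Keep in mind this means 32 separate textures (and texture binds) per animation; packing the frames into a few 1024x1024 atlas pages with TexturePacker is the other common way to stay under the device limit.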

opengl / JOGL - best way to draw textures

I have been thinking about the best way to draw a picture in OpenGL / JOGL.
I am currently programming a game, and my goal is to save the information about a picture in a text file instead of saving the picture itself.
My idea was to write a method that saves every pixel's information (RGB) at its X and Y position.
Then I draw every pixel and it is finished.
What do you think about that idea?
You should simply use TextureIO to make a texture from your picture and use this texture with 4 vertices that have some texture coordinates while drawing. glReadPixels() is very slow, reading each pixel of a picture would take a lot of time, saving its content as a text file would require a lot of memory (saving it as a compressed image in a loss-less format like PNG might be worth a try), drawing each pixel one by one would be a lot slower than drawing a texture. derhass is right. You could vectorize your picture (make a SVG from it) but you would have to rasterize it after or you would have to implement some rendering of vectorized contents and it would be probably slower than using a texture. I'm not sure you really need an offscreen buffer.
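A minimal sketch of that (JOGL 2, fixed-function pipeline, error handling omitted; "picture.png" is a placeholder):

import java.io.File;
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.TextureIO;

// In init(): load the picture once as a texture (handle the IOException as appropriate).
Texture tex = TextureIO.newTexture(new File("picture.png"), false);

// In display(): draw it as a single textured quad instead of pixel by pixel.
void drawPicture(GL2 gl, Texture tex, float x, float y, float w, float h) {
    gl.glEnable(GL2.GL_TEXTURE_2D);
    tex.bind(gl);
    gl.glBegin(GL2.GL_QUADS);
    gl.glTexCoord2f(0f, 0f); gl.glVertex2f(x, y);
    gl.glTexCoord2f(1f, 0f); gl.glVertex2f(x + w, y);
    gl.glTexCoord2f(1f, 1f); gl.glVertex2f(x + w, y + h);
    gl.glTexCoord2f(0f, 1f); gl.glVertex2f(x, y + h);
    gl.glEnd();
}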
I had a similar problem when I began working on my first person shooter. I wasn't using JOGL at the very beginning, I reused the source code of someone else, it relied on software rendering in an image, it was very slow. Then, I used JOGL to draw each pixel one by one instead of using Java2D, it was about 4 times faster on my machine but still very slow for me. At the end, I had to redesign the whole rendering to use OpenGL for what it is for as derhass would say, I used triangles, quads and textures. The performance became acceptable and this is what you should do, use OpenGL to draw primitives and clarify what you're trying to achieve so that we can help you a bit better.

Pixels Array Too CPU Intensive

I have been working on a Java 2D game for a little while. It is a raster graphics system with an array of pixels (integers).
private BufferedImage image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
private int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
I then create an array of integers separately, manipulate it with the screen objects (rendered from external image files, such as .png), then copy that array into my main one, which is projected on the screen. I found no performance difference (and really didn't expect to) in using array copying methods over iteration. Regardless, the graphics render well and the game is coming along swimmingly.
However, I have found this to be extremely CPU intensive. My activity monitor says that the application is using more than 100% of my CPU. This is obviously because I am iterating through a pixel array (76k integers) each update (60 times per second).
I chose this technique for educational purposes. This is a personal project and I simply wanted to get insight into Java graphics. I am, by no means, married to this rendering technique.
My question comes in three related parts...
Obviously there is a better way to do this. What libraries/frameworks would do it better?
Will those libraries essentially do the same thing (loop through the pixels), just in a more efficient way?
Is there a way I can optimize this technique without using external tools, or is it just not worth it?
OpenGL is most often associated with 3D graphics applications, but it can also be used for 2D applications. Using an orthographic projection and textured quads to take the place of sprites and background images, you can construct the graphics and interface for a game with absolutely no 3D elements at all, despite the fact that the drawing is being done, in a sense, in 3D.
I have no experience using OpenGL in Java. However, I know that it is possible using the LWJGL library and possibly others. You should check it out if you want to drastically improve your graphics performance.
To answer your second question, OpenGL in this application actually works very differently from your approach, using the same techniques it would use to draw textured polygons in 3D rather than simple block image transfers into a framebuffer, though pixel data can be accessed and manipulated.
Here's an example of 2D graphics using OpenGL with C, from a game project I never finished but nevertheless may serve as a good visual example of what I'm talking about. In this case, I did not use an orthographic projection, but rather a perspective matrix to get parallax effects by drawing the various layers at different depths in 3D space.
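In Java, the same idea with LWJGL's fixed-function API looks roughly like this (a sketch only; textureId stands for a texture you have already uploaded with glTexImage2D or a loader such as Slick-Util):

import static org.lwjgl.opengl.GL11.*;

// Once, after the GL context exists: an orthographic projection where one
// unit equals one pixel, with the origin at the top-left corner.
void setup2D(int screenWidth, int screenHeight) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, screenWidth, screenHeight, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
}

// Each frame: a textured quad takes the place of a sprite blit.
void drawSprite(int textureId, float x, float y, float w, float h) {
    glBindTexture(GL_TEXTURE_2D, textureId);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(x, y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x, y + h);
    glEnd();
}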

draw image or draw filled circle?

We have an old Java Swing application. We need to display thousands, even hundreds of thousands, of small circle spots on the canvas based on real data. Right now we have an image file of a small circle spot; when we need it, we draw that image onto the canvas - thousands or hundreds of thousands of times.
Now I am thinking it may be better (for performance and memory usage) to just draw a filled circle each time instead of loading the image and drawing it.
What is your opinion?
thanks,
You only need to load the template image once, hold it in memory, and copy it to the canvas as needed using the Graphics2D drawImage method. Drawing many filled circles can become expensive because of the flood-fill/scan-fill and Bresenham circle-drawing work involved each time. To optimize the rendering you can also decimate the rendered result or perform clustering, since the user will not really appreciate dense overlapping circles anyway.
To reduce render calls, test the pixel where your template is going and skip the draw if it is already coloured.
Here is a nice benchmarking applet.
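Schematically, something like this (the spot file name and coordinate arrays are placeholders):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Load the spot template once, up front (handle the IOException as appropriate).
BufferedImage spot = ImageIO.read(new File("spot.png"));

// In the paint code: one drawImage call per data point.
void drawSpots(Graphics2D g2, int[] xs, int[] ys) {
    for (int i = 0; i < xs.length; i++) {
        g2.drawImage(spot, xs[i], ys[i], null);
    }
}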
It is almost certainly much faster to hold a single image and draw it many times than to make a call to draw a filled circle. Here is a recent presentation on the subject, showing that it is faster to draw an image than even a simple horizontal cross. http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-4170&yr=2009&track=javase
Time your code
It is most definitely faster to draw an image lots of times than drawing a circle or String lots of times and it's very easy to test. At the beginning of your paintComponent() method add the line:
protected void paintComponent(Graphics g) {
    long start = System.currentTimeMillis();
    ...
    // draw 100,000 circles as images or circles
    ...
    System.out.println("Rendering time: " +
            (System.currentTimeMillis() - start) + " ms");
}
If the times turn out to be zero all the time, you can instead use System.nanoTime().
Paint to Cached Image
Another thing you can do is to paint these circles onto an image and only recreate the image when the content changes. If nothing has changed just draw that image onto the Graphics2D object instead of redrawing all of the circles. This is commonly called double buffering. You also can use Volatile Images to take advantage of hardware acceleration.
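Very roughly (a sketch, assuming a dirty flag that your model sets whenever the circle data changes):

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Sketch: redraw the circles into a cached image only when the data changes,
// then blit the cached image on every repaint.
private BufferedImage cache;
private boolean dirty = true;

protected void paintComponent(Graphics g) {
    super.paintComponent(g);
    if (dirty || cache == null) {
        cache = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = cache.createGraphics();
        // ... draw all of the circles into g2 here ...
        g2.dispose();
        dirty = false;
    }
    g.drawImage(cache, 0, 0, null);   // cheap: one image blit per repaint
}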
Create Compatible Images
You should also make sure you use images that are compatible with the user's monitor by using createCompatibleImage() as shown below:
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice gs = ge.getDefaultScreenDevice();
GraphicsConfiguration gc = gs.getDefaultConfiguration();
// Create an image that does not support transparency
BufferedImage bimage = gc.createCompatibleImage(width, height, Transparency.OPAQUE);
// Create an image that supports transparent pixels
bimage = gc.createCompatibleImage(width, height, Transparency.BITMASK);
// Create an image that supports arbitrary levels of transparency
bimage = gc.createCompatibleImage(width, height, Transparency.TRANSLUCENT);
More Tips
I'd recommend the book Filthy Rich Clients. It has lots of great tips for speeding up swing. Especially look at chapters 4 and 5 about images and performance.
I don't know if this would be helpful, but you can test which one works better for you by benchmarking the worst case. I think the filled circle would be best, though.
A third way to do it is to use the Unicode character for a filled circle, ● (U+25CF), since you can bet that rendering thousands of characters (as in: a piece of text) is the most normal thing for any graphics engine.
It's hard to predict which is faster, because certain operations under certain circumstances are accelerated by the GPU hardware of the video card.
If the GPU is used to draw the circle, then that would be much faster than the CPU copying the pixels of a buffered circle image.
There is VolatileImage as well. Perhaps it's possible to set up the image blits so that they end up being hardware accelerated.
The only way to find out is to benchmark it yourself.
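If you do want to try the VolatileImage route, the shape of it is roughly this (a sketch, not benchmarked; the orange circle is just an example):

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.image.VolatileImage;

// Sketch: keep the circle pre-rendered in a VolatileImage so the copy to the
// screen can stay in video memory; validate()/contentsLost() cover the cases
// where the driver discards the surface and it has to be redrawn.
VolatileImage circle;

void ensureCircle(GraphicsConfiguration gc, int size) {
    int state = (circle == null)
            ? VolatileImage.IMAGE_INCOMPATIBLE
            : circle.validate(gc);
    if (state == VolatileImage.IMAGE_INCOMPATIBLE) {
        circle = gc.createCompatibleVolatileImage(size, size);
    }
    if (state != VolatileImage.IMAGE_OK || circle.contentsLost()) {
        Graphics2D g2 = circle.createGraphics();
        g2.setColor(Color.ORANGE);
        g2.fillOval(0, 0, size, size);
        g2.dispose();
    }
}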
