Using a sprite sheet with different sized sprites in Slick2D - java

I searched through the API for SpriteSheet, but I couldn't find anything on how to make a sprite sheet with different sized sprites.
The sprite sheet that I'm using has a row of 16x16px tiles, a row of 24x24px tiles under it, a row of 8x8px under that, and so on.
Originally, not using Slick2D, I used BufferedImage.getSubimage() to obtain each sprite from a temporary BufferedImage of the sprite sheet. Is there a similar method here that I can use?

I don't believe there is a way to do a direct sub-image in the current version of the API, at least at the time of this writing.
However, there are three possible options that I can think of (in addition to the option of just adding said method calls yourself - it's open source after all):
You could instantiate several SpriteSheet objects from the same source Image, one for each Sprite size, if you really want to keep them in the same source file.
You could take the Image instance and call getSubImage on it to split the Image into three images, one for each size (24x24, 16x16, and so on). Then, from those sub-images, instantiate SpriteSheets (see the sketch after this list).
You could split the source file into separate files based on the size. That is, have your 24x24 sprite cells in one file, your 16x16 in another file, and so on.
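For example, here is a minimal sketch of the second option (the file name is hypothetical, and the y offsets assume the row layout from the question: 16x16 on top, 24x24 under it, 8x8 under that):
Image sheet = new Image("sheet.png"); // hypothetical file name
Image row16 = sheet.getSubImage(0, 0, sheet.getWidth(), 16);
Image row24 = sheet.getSubImage(0, 16, sheet.getWidth(), 24);
Image row8 = sheet.getSubImage(0, 40, sheet.getWidth(), 8);
SpriteSheet sprites16 = new SpriteSheet(row16, 16, 16);
SpriteSheet sprites24 = new SpriteSheet(row24, 24, 24);
SpriteSheet sprites8 = new SpriteSheet(row8, 8, 8);
Each SpriteSheet can then hand out cells of its own size via getSprite(x, y).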

You can just keep an Image and use an overload of the Graphics object's drawImage method to specify where to draw which part of the image:
g.drawImage(image, x1, y1, x2, y2, srcX1, srcY1, srcX2, srcY2);
See [javadoc](http://slick.cokeandcode.com/javadoc/org/newdawn/slick/Graphics.html#drawImage(org.newdawn.slick.Image, float, float, float, float, float, float, float, float))
The first parameter is the image instance. The next two parameters define the point on screen where the rendering begins. X2 and y2 define the end point of the rendering. Usually x2 is x1 + spriteWidth and y2 is y1 + spriteHeight, but you can change those values to draw the sprite at different sizes.
The last four parameters work the same way, but they define the area of the sprite sheet that will be drawn to the screen.
If we take your example and want to draw the second tile of the third row (the 8x8 row), the call would look like this:
int tileWidth = 8;
int tileHeight = 8;
int sourceX = 8;  // second tile in the row, zero-indexed
int sourceY = 40; // the third row starts below the 16px and 24px rows
int drawX = 34;
int drawY = 65;
g.drawImage(image, drawX, drawY, drawX + tileWidth, drawY + tileHeight,
        sourceX, sourceY, sourceX + tileWidth, sourceY + tileHeight);
When I work with sprite sheets, I use hardcoded values only in some very rare cases (mostly tests); otherwise I use a sprite class that stores the source x1, y1, x2 and y2 values. I can pack a bunch of them into a list or a map, and that gives me a sprite index. Usually I generate the definitions somehow and then serialize the list, so I can simply reload it when I need it.
Here is a short example of my XML definition (I store the width and height rather than the x2 and y2 values in the XML, as I find it more human-readable and more convenient for manual editing; after deserialization I calculate the x2 and y2 values):
<spriteSheet imageName="buildings" name="buildings">
<sprite name="1x2 industry 01" spriteX="0" spriteY="0" width="50" height="112"/>
<sprite name="1x2 quarters 01" spriteX="50" spriteY="0" width="50" height="112"/>
<sprite name="1x1 spaceport 01" spriteX="243" spriteY="112" width="51" height="56"/>
...
</spriteSheet>
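A minimal sketch of such a sprite definition class (class and field names here are illustrative, not from my actual code); the width and height from the XML are converted to x2/y2 after deserialization:
public final class SpriteDef {
    public final String name;
    public final float x1, y1, x2, y2;

    public SpriteDef(String name, float x1, float y1, float width, float height) {
        this.name = name;
        this.x1 = x1;
        this.y1 = y1;
        this.x2 = x1 + width;  // derived, not stored in the XML
        this.y2 = y1 + height; // derived, not stored in the XML
    }
}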

Related

How to draw star in java swing using fillPolygon

I'm having trouble setting the coordinates of the star. Are there any better solutions for this? I cannot get the correct shape. Can someone help me with this?
public void star(Graphics shapes)
{
    shapes.setColor(color);
    int[] x = {42, 52, 72, 52, 60, 40, 15, 28, 9, 32, 42};
    int[] y = {38, 62, 68, 80, 105, 85, 102, 75, 58, 20, 38};
    shapes.fillPolygon(x, y, 5);
}
Sun's implementation provides some custom Java 2D shapes like Rectangle, Oval, Polygon etc., but it's not enough. There are GUIs which require more custom shapes like regular polygons, stars, and regular polygons with rounded corners. The project provides some more shapes that are often used. All the classes implement the Shape interface, which allows the user to use all the usual methods of Graphics2D like fill() and draw(), and to create their own shapes by combining them.
[Images: Regular Polygon, Star]
Honestly, I'd use the 2D Graphics shapes API; it allows you to "draw" a shape, which is simpler (IMHO) than using Polygon. The advantage is that shapes are easy to paint and transform.
Having said that, the problem you're actually having is the fact that you're not passing the right information to the fillPolygon method.
If you take a look at the JavaDocs for Graphics#fillPolygon, you'll note that the last parameter is the number of points:
nPoints - the total number of points.
But you're passing 5, when there are actually 11 points in your arrays.
Something like...
shapes.setColor(color);
int[] x = {42,52,72,52,60,40,15,28,9,32,42};
int [] y = {38,62,68,80,105,85,102,75,58,20,38};
shapes.fillPolygon(x, y, 11);
should now draw all the points, but some of your coordinates are slightly off, so you might want to check that
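For the shapes-API route mentioned above, here is a hedged sketch of building a five-pointed star as a Path2D (the center and radius parameters are illustrative):
import java.awt.geom.Path2D;

static Path2D.Double star(double cx, double cy, double outer, double inner) {
    Path2D.Double path = new Path2D.Double();
    // 10 vertices, alternating between the outer and inner radius
    for (int i = 0; i < 10; i++) {
        double r = (i % 2 == 0) ? outer : inner;
        double angle = Math.PI / 2 + i * Math.PI / 5;
        double px = cx + r * Math.cos(angle);
        double py = cy - r * Math.sin(angle); // screen y axis points down
        if (i == 0) {
            path.moveTo(px, py);
        } else {
            path.lineTo(px, py);
        }
    }
    path.closePath();
    return path;
}
A Graphics2D can then simply fill it: g2.fill(star(60, 60, 50, 20));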
The second-to-last number in your y array should be 60, not 20:
g2.setColor(color);
int[] x = {42, 52, 72, 52, 60, 40, 15, 28, 9, 32, 42};
int[] y = {38, 62, 68, 80, 105, 85, 102, 75, 58, 60, 38};
g2.fillPolygon(x, y, 11);
I'm having trouble setting the coordinates of the star. Are there any better solutions for this?
Check out Playing With Shapes. You should be able to use the ShapeUtils class to generate your shape.
This class will generate the points for you so you don't need to manage every pixel.
A star has 10 points, mind you, not 11:
setBackground(Color.black);
int[] x = {250, 150, 0, 150, 100, 250, 400, 350, 500, 350};
int[] y = {100, 200, 200, 300, 400, 300, 400, 300, 200, 200};
g.fillPolygon(x, y, 10);
setForeground(Color.cyan);
This will draw a star with a black background and a cyan foreground.

Calculate correct width of a text

I need to read a plan exported by AutoCAD to PDF and place some markers with text on it with PDFBox.
Everything works fine, except the calculation of the width of the text, which is written next to the markers.
I skimmed through the whole PDF specification and read in detail the parts that deal with graphics and text, but to no avail. As far as I understand, the glyph coordinate space is set up at 1/1000 of the user coordinate space. Hence the width needs to be scaled by 1000, but it's still a fraction of the real width.
This is what I am doing to position the text:
float textWidth = font.getStringWidth(marker.id) * 0.043f;
contentStream.beginText();
contentStream.setTextScaling(1, 1, 0, 0);
contentStream.moveTextPositionByAmount(
        marker.endX + marker.getXTextOffset(textWidth, fontPadding),
        marker.endY + marker.getYTextOffset(fontSize, fontPadding));
contentStream.drawString(marker.id);
contentStream.endText();
The * 0.043f works as an approximation for one document, but fails for the next.
Do I need to reset any other transformation matrix except the text matrix?
EDIT: A full example project is on GitHub, with tests and example PDFs: https://github.com/ascheucher/pdf-stamp-prototype
Thanks for your help!
Unfortunately the question and comments merely include (by running the sample project) the actual results for two source documents and the description
The annotating text should be center aligned on the top and bottom marker, aligned to the left on the right marker and aligned to the right on the left marker. The alignment is not working for me, as font.getStringWidth( .. ) returns only a fraction of what it seems it should. And the discrepancy seems to be different in both PDFs.
but no concrete sample discrepancy to repair.
There are several issues in the code, though, which may lead to such observations (and other ones, too!). Fixing them should be done first; this may already resolve the issues observed by the OP.
Which box to take
The code of the OP derives several values from the media box:
PDRectangle pageSize = page.findMediaBox();
float pageWidth = pageSize.getWidth();
float pageHeight = pageSize.getHeight();
float lineWidth = Math.max(pageWidth, pageHeight) / 1000;
float markerRadius = lineWidth * 10;
float fontSize = Math.min(pageWidth, pageHeight) / 20;
float fontPadding = Math.max(pageWidth, pageHeight) / 100;
These seem to be chosen to be optically pleasing in relation to the page size. But the media box is not, in general, the final displayed or printed page size, the crop box is. Thus, it should be
PDRectangle pageSize = page.findCropBox();
(Actually the trim box, the intended dimensions of the finished page after trimming, might even be more apropos; the trim box defaults to the crop box. For details read here.)
This is not relevant for the given sample documents as they do not contain explicit crop box definitions, so the crop box defaults to the media box. It might be relevant for other documents, though, e.g. those the OP could not include.
Which PDPageContentStream constructor to use
The code of the OP adds a content stream to the page at hand using this constructor:
PDPageContentStream contentStream = new PDPageContentStream(doc, page, true, true);
This constructor appends (first true) and compresses (second true) but unfortunately it continues in the graphics state left behind by the pre-existing content.
Details of the graphics state of importance for the observations at hand:
Transformation matrix - it may have been changed to scale (or rotate, skew, move ...) any new content added
Character spacing - it may have been changed to put any new characters added nearer to or farther from each other
Word spacing - it may have been changed to put any new words added nearer to or farther from each other
Horizontal scaling - it may have been changed to scale any new characters added
Text rise - it may have been changed to displace any new characters added vertically
Thus, a constructor should be chosen which also resets the graphics state:
PDPageContentStream contentStream = new PDPageContentStream(doc, page, true, true, true);
The third true tells PDFBox to reset the graphics state, i.e. to surround the former content with a save-state/restore-state operator pair.
This is relevant for the given sample documents, at least the transformation matrix is changed.
Setting and using the CalRGB color space
The OP's code sets the stroking and non-stroking color spaces to a calibrated color space:
contentStream.setStrokingColorSpace(new PDCalRGB());
contentStream.setNonStrokingColorSpace(new PDCalRGB());
Unfortunately new PDCalRGB() does not create a valid CalRGB color space object, its required WhitePoint value is missing. Thus, before selecting a calibrated color space, initialize it properly.
Thereafter the OP's code sets the colors using
contentStream.setStrokingColor(marker.color.r, marker.color.g, marker.color.b);
contentStream.setNonStrokingColor(marker.color.r, marker.color.g, marker.color.b);
These (int, int, int) overloads unfortunately use the RG and rg operators implicitly selecting the DeviceRGB color space. To not overwrite the current color space, use the (float[]) overloads with normalized (0..1) values instead.
While this is not relevant for the observed issue, it causes error messages by PDF viewers.
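A hedged sketch of the float[] variant, assuming marker.color holds 0-255 int components:
float[] rgb = new float[] {
        marker.color.r / 255f, marker.color.g / 255f, marker.color.b / 255f };
contentStream.setStrokingColor(rgb);    // keeps the selected color space
contentStream.setNonStrokingColor(rgb);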
Calculating the width of a drawn string
The OP's code calculates the width of a drawn string using
float textWidth = font.getStringWidth(marker.id) * 0.043f;
and the OP is surprised
The * 0.043f works as an approximation for one document, but fails for the next.
There are two factors building this "magic" number:
As the OP has remarked, the glyph coordinate space is set up at 1/1000 of the user coordinate space, and that number is in glyph space, thus a factor of 0.001.
As the OP has ignored, he wants the width of the string at the font size he selected. But the font object has no knowledge of the current font size and returns the width for a font size of 1. As the OP selects the font size dynamically as Math.min(pageWidth, pageHeight) / 20, this factor varies. In the case of the two given sample documents it is about 42, but probably totally different in other documents.
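Putting both factors together, the corrected calculation might look like this (a sketch, not the definitive fix):
float textWidth = font.getStringWidth(marker.id) / 1000f * fontSize;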
Positioning text
The OP's code positions the text like this starting from identity text matrices:
contentStream.moveTextPositionByAmount(
        marker.endX + marker.getXTextOffset(textWidth, fontPadding),
        marker.endY + marker.getYTextOffset(fontSize, fontPadding));
using methods getXTextOffset and getYTextOffset:
public float getXTextOffset(float textWidth, float fontPadding) {
    if (getLocation() == Location.TOP)
        return (textWidth / 2 + fontPadding) * -1;
    else if (getLocation() == Location.BOTTOM)
        return (textWidth / 2 + fontPadding) * -1;
    else if (getLocation() == Location.RIGHT)
        return 0 + fontPadding;
    else
        return (textWidth + fontPadding) * -1;
}

public float getYTextOffset(float fontSize, float fontPadding) {
    if (getLocation() == Location.TOP)
        return 0 + fontPadding;
    else if (getLocation() == Location.BOTTOM)
        return (fontSize + fontPadding) * -1f;
    else
        return fontSize / 2 * -1;
}
In the case of getXTextOffset, I doubt that adding fontPadding for Location.TOP and Location.BOTTOM makes sense, especially in light of the OP's desire
The annotating text should be center aligned on the top and bottom marker
For the text to be centered it should not be shifted off-center.
The case of getYTextOffset is more difficult. The OP's code is built upon two misunderstandings: it assumes
that the text position selected by moveTextPositionByAmount is the lower left, and
that the font size is the character height.
Actually the text position sits on the baseline; the glyph origin of the next drawn glyph will be placed there.
Thus, the y position either has to be corrected to take the descent into account (for centering on the whole glyph height) or to use only the ascent (for centering on the above-baseline glyph height).
And a font size does not denote the actual character height but is arranged so that the nominal height of tightly spaced lines of text is 1 unit for font size 1. "Tightly spaced" implies that some small amount of additional inter-line space is contained in the font size.
In essence for centering vertically one has to decide what to center on, whole height or above-baseline height, first letter only, whole label, or all font glyphs. PDFBox does not readily supply the necessary information for all cases but methods like PDFont.getFontBoundingBox() should help.
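As an illustrative sketch, assuming the font descriptor carries its metrics in thousandths of text space units (as PDFBox's PDFontDescriptor does), centering on the above-baseline glyph height could look like this; centerY is an assumed marker center, not a name from the OP's code:
float ascent = font.getFontDescriptor().getAscent() / 1000f * fontSize;
float yBaseline = centerY - ascent / 2; // put the baseline so the ascent straddles centerY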

Data Structure and Algorithm for a 3D Volume?

I've been tinkering with some Minecraft Bukkit plugin development, and am currently working on something where I need to be able to define a "volume" of space and determine when an entity (player) moves from outside that volume to inside (or vice versa).
If I restrict the "volume" to boxes, it should be simple. The data structure can just maintain the X/Y/Z bounding integers (so 6 total integers) and calculating entry/exit given two points (movement from and movement to) should just be a matter of determining if A) all three To values are within all three ranges and B) at least one From value is outside its corresponding range.
(Though if there's a better, more performant way of storing and calculating this, I'm all ears.)
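For reference, the box test I have in mind looks something like this (names are illustrative):
static boolean enteredBox(int[] min, int[] max, int[] from, int[] to) {
    boolean toInside = true;
    boolean fromInside = true;
    for (int i = 0; i < 3; i++) { // i = 0, 1, 2 for x, y, z
        toInside &= to[i] >= min[i] && to[i] <= max[i];
        fromInside &= from[i] >= min[i] && from[i] <= max[i];
    }
    return toInside && !fromInside; // moved from outside to inside
}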
However, what if the "volume" isn't a simple box? Suppose I have an oddly-shaped room and want to enclose the volume of that room. I could arrange multiple "volumes" individually to fill the overall space, however that would result in false positives when an entity moves from one to another.
Not having worked in gaming or 3D engines before, I'm drawing a blank on how I might be able to structure something like this. But it occurs to me that this is likely a problem which has been solved and has known established patterns. Essentially, I'm trying to:
Define a data structure which can represent an oddly-shaped volume of space (albeit at least based on block coordinates).
Define an algorithm which, given a source and destination of movement, can determine if the movement crossed a boundary of the defined space.
Are there established patterns and practices for this?
I don't know if this has been used in any kind of video game before, but the first thing that came to mind is the classic Sieve of Eratosthenes implementation; the only change would be to make the boolean array 3D and use the indices as coordinates. Obviously, though, as x and y values can be huge in Minecraft, you'd probably want to save space by storing an offset between the world's 0,0 position and your selection, something like this:
class OddArea
{
    static final int MAX_SELECTION_SIZE = 64; // Or whatever

    public final int xOffset, yOffset;
    // 256 = chunk height
    public final boolean[][][] squares =
            new boolean[MAX_SELECTION_SIZE][MAX_SELECTION_SIZE][256];

    OddArea()
    {
        this(0, 0);
    }

    OddArea(final int xOffset, final int yOffset)
    {
        this.xOffset = xOffset;
        this.yOffset = yOffset;
    }

    void addBlock(final int x, final int y, final int z)
    {
        this.squares[x - this.xOffset][y - this.yOffset][z] = true;
    }

    boolean isInsideArea(final int x, final int y, final int z)
    {
        return this.squares[x - this.xOffset][y - this.yOffset][z];
    }
}
z doesn't require an offset as the Minecraft world is only 256 blocks high.
The only issue I can think of with this setup is that you'd have to know the lowest x,y coordinates before you start filling up your object.
In general you should be using a data structure similar to k-d trees. You can represent your volume as a union of either cubes or spheres, and it should be easy to evaluate whether an object enters the volume.
BTW, to calculate if two spheres intersect, check if the distance between centers is less than sum of radii.
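A minimal sketch of that test (comparing squared distances avoids the square root):
static boolean spheresIntersect(double x1, double y1, double z1, double r1,
                                double x2, double y2, double z2, double r2) {
    double dx = x2 - x1;
    double dy = y2 - y1;
    double dz = z2 - z1;
    double radiusSum = r1 + r2;
    return dx * dx + dy * dy + dz * dz <= radiusSum * radiusSum;
}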

Java bufferstrategy graphics or integer array

When doing 2D game development in Java, most tutorials create a bufferstrategy to render. This makes perfect sense.
However, where people seem to skew off is the method of drawing the actual graphics to the buffer.
Some of the tutorials create a buffered image, then create an integer array to represent the individual pixel colors.
private BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
private int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
Graphics g = bs.getDrawGraphics();
g.setColor(new Color(0x556B2F));
g.fillRect(0, 0, getWidth(), getHeight());
g.drawImage(image, 0, 0, getWidth(), getHeight(), null);
However, some other tutorials don't create the buffered image and pixel array, and instead use the Graphics component of the BufferStrategy to draw their images directly to the buffer.
Graphics g = bs.getDrawGraphics();
g.setColor(new Color(0x556B2F));
g.fillRect(0, 0, getWidth(), getHeight());
g.drawImage(testImage.image, x*128, y*128, 128, 128, null);
I was just wondering: why create the entire int array, then draw it? This requires a lot more work to implement rectangles, stretching, transparency, etc. The Graphics component of the buffer strategy already has methods which can easily be called.
Is there some huge performance boost from using the int array?
I've looked this up for hours, and all the sites I've seen just explain what they're doing, not why they chose to do it that way.
Let's be clear about one thing: both snippets of code do exactly the same thing - draw an Image. The snippets are rather incomplete, however - the second snippet does not show what 'testImage.image' actually is or how it is created. But they both ultimately call Graphics.drawImage(), and all variants of drawImage() in either Graphics or Graphics2D draw an Image, plain and simple. In the second case we simply don't know if it is a BufferedImage, a VolatileImage or even a Toolkit Image.
So there is no difference in drawing actually illustrated here!
There is but one difference between the two snippets - the first one also obtains a direct reference to the integer array that is internally backing the Image instance. This gives direct access to the pixel data, rather than having to go through the (Buffered)Image API using, for example, the relatively slow getRGB() and setRGB() methods. The reason for doing that can't be made specific in the context of this question: the array is obtained but never used in the snippet. So in order to give the following explanation any reason to exist, we must assume that someone wants to directly read or edit the pixels of the image, quite possibly for optimization reasons, given the "slowness" of the (Buffered)Image API for manipulating data.
And those optimization reasons may be a premature optimization that can backfire on you.
First of all, this code only works because the type of the image is TYPE_INT_RGB, which gives the image a DataBufferInt. If it had been another type of image, e.g. TYPE_3BYTE_BGR, this code would fail with a ClassCastException, since the backing data buffer wouldn't be a DataBufferInt. This may not be much of a problem when you only create images manually and enforce a specific type, but images tend to be loaded from files created by external tools.
Secondly, there is another bigger downside to directly accessing the pixel buffer: when you do that, Java2D will refuse acceleration of that image since it cannot know when you will be making changes to it outside of its control. Just for clarity: acceleration is the process of keeping an unaltered image in video memory rather than copying it from system memory each time it is drawn. This is potentially a huge performance improvement (or loss if you break it) depending on how many images you work with.
How can I create a hardware-accelerated image with Java2D?
(As that related question shows you: you should use GraphicsConfiguration.createCompatibleImage() to construct BufferedImage instances).
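A minimal sketch of that approach (the sizes are illustrative):
GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDefaultConfiguration();
// images created this way can stay accelerated, as long as you don't
// grab their backing buffer directly
BufferedImage img = gc.createCompatibleImage(128, 128, Transparency.TRANSLUCENT);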
So in essence: try to use the Java2D API for everything, don't access buffers directly. This off-site resource gives a good idea just what features the API has to support you in that without having to go low level:
http://www.pushing-pixels.org/2008/06/06/effective-java2d.html
First of all, there are lots of historical aspects. The early API was very basic, so the only way to do anything non-trivial was to implement all the required primitives yourself.
Raw data access is a bit old-fashioned and we can try to do some "archeology" to find the reason such approach was used. I think there are two main reasons:
1. Filter effects
Let's not forget that filter effects (various kinds of blur, etc.) are simple, very important for any game developer, and widely used.
The simplest way to implement such an effect in Java 1 was to use an int array and a filter defined as a matrix. Herbert Schildt, for example, used to have lots of such demos:
public class Blur {
    private int width, height;   // image dimensions
    private int[] imgpixels;     // source pixels, one packed RGB int each
    private int[] newimgpixels;  // destination pixels

    public void convolve() {
        // 3x3 box blur: replace each pixel with the average of
        // itself and its eight neighbours
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                int rs = 0;
                int gs = 0;
                int bs = 0;
                for (int k = -1; k <= 1; k++) {
                    for (int j = -1; j <= 1; j++) {
                        int rgb = imgpixels[(y + k) * width + x + j];
                        int r = (rgb >> 16) & 0xff;
                        int g = (rgb >> 8) & 0xff;
                        int b = rgb & 0xff;
                        rs += r;
                        gs += g;
                        bs += b;
                    }
                }
                rs /= 9;
                gs /= 9;
                bs /= 9;
                newimgpixels[y * width + x] =
                        0xff000000 | rs << 16 | gs << 8 | bs;
            }
        }
    }
}
Naturally, you can implement that using getRGB, but raw data access is way more efficient. Later, Graphics2D provided a better abstraction layer:
public interface BufferedImageOp
This interface describes single-input/single-output operations performed on BufferedImage objects. It is implemented by AffineTransformOp, ConvolveOp, ColorConvertOp, RescaleOp, and LookupOp. These objects can be passed into a BufferedImageFilter to operate on a BufferedImage in the ImageProducer-ImageFilter-ImageConsumer paradigm.
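With that abstraction the blur above shrinks to a few lines; a hedged sketch, where source is an assumed BufferedImage:
float ninth = 1.0f / 9.0f;
float[] blurKernel = {
    ninth, ninth, ninth,
    ninth, ninth, ninth,
    ninth, ninth, ninth
};
// ConvolveOp applies the kernel to every pixel, like the manual loop above
BufferedImage blurred =
        new ConvolveOp(new Kernel(3, 3, blurKernel)).filter(source, null);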
2. Double buffering
Another problem was related to flickering and really slow drawing. Double buffering eliminates the ugly flickering, and all of a sudden it also provides an easy way to do filter effects, because you already have a buffer.
Something like a final conclusion :)
I would say the situation you've described is pretty common for any evolving technology. There are two ways to achieve the same goals:
use the legacy approach and code more, etc
rely on new abstraction layers, provided techniques, etc
There are also some useful extensions to simplify your life even more, so no need to use int[] :)

Comparing two images for motion detecting purposes

I've started differentiating two images by counting the number of different pixels using a simple algorithm:
private int returnCountOfDifferentPixels(String pic1, String pic2)
{
    Bitmap i1 = loadBitmap(pic1);
    Bitmap i2 = loadBitmap(pic2);
    int count = 0;
    for (int y = 0; y < i1.getHeight(); ++y)
        for (int x = 0; x < i1.getWidth(); ++x)
            if (i1.getPixel(x, y) != i2.getPixel(x, y))
            {
                count++;
            }
    return count;
}
However this approach seems to be inefficient in its initial form, as there is always a very high number of pixels which differ even in very similar photos.
I was thinking of a way to determine whether two pixels are really THAT different.
Bitmap.getPixel(x, y) on Android returns a packed color int.
How can I implement a proper differentiation between two colors, to help with my motion detection?
You are right, because of noise and other factors there is usually a lot of raw pixel change in a video stream. Here are some options you might want to consider:
Blurring the image first, ideally with a Gaussian filter or with a simple box filter. This just means that you take the (weighted) average over the neighboring pixels and the pixel itself. This should reduce the sensor noise quite a bit already.
Only adding the difference to count if it's larger than some threshold. This has the effect of only considering pixels that have really changed a lot. This is very easy to implement and might already solve your problem alone.
Thinking about it, try these two options first. If they don't work out, I can give you some more options.
EDIT: I just saw that you're not actually summing up differences but just counting different pixels. This is fine if you combine it with option 2. Option 1 still works, but it might be overkill.
Also, to find out the difference between two colors, use the methods of the Color class:
int p1 = i1.getPixel(x, y);
int p2 = i2.getPixel(x, y);
int totalDiff = Color.red(p1) - Color.red(p2) + Color.green(p1) - Color.green(p2) + Color.blue(p1) - Color.blue(p2);
Now you can come up with a threshold the totalDiff must exceed to contribute to count.
Of course, you can play around with these numbers in various ways. The above code, for example, only computes changes in pixel intensity (brightness). If you also wanted to take into account changes in hue and saturation, you would have to compute totalDiff like this:
int totalDiff = Math.abs(Color.red(p1) - Color.red(p2)) + Math.abs(Color.green(p1) - Color.green(p2)) + Math.abs(Color.blue(p1) - Color.blue(p2));
Also, have a look at the other methods of Color, for example RGBToHSV(...).
I know that this is essentially very similar to another answer here, but I think restating it in a different form might prove useful to those seeking a solution. This involves having more than two images over time. If you literally only have two, this will not work, but an equivalent method will.
Keep a history for all pixels, updated on each frame. For example, for each pixel:
history[x, y] = (history[x, y] * (w - 1) + get_pixel(x, y)) / w
where w might be w = 20. The higher w is, the larger the spike for motion, but the longer motion has to be absent before the history resets.
Then to determine if something has changed you can do this for each pixel:
changed_delta = abs(history[x, y] - get_pixel(x, y))
total_delta += changed_delta
You will find that it stabilizes most of the noise and when motion happens you will get a large difference. You are essentially taking many frames and detecting motion from the many against the newest frame.
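A hedged Java sketch of this scheme on Android (the class and method names are illustrative, and brightness is used as the per-pixel value):
import android.graphics.Bitmap;
import android.graphics.Color;

class MotionHistory {
    static final int W = 20;  // history weight: higher = slower to adapt
    final float[] history;    // one running average per pixel
    final int width, height;

    MotionHistory(int width, int height) {
        this.width = width;
        this.height = height;
        this.history = new float[width * height];
    }

    // returns the total change of this frame against the history
    float update(Bitmap frame) {
        float totalDelta = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int p = frame.getPixel(x, y);
                float value =
                        (Color.red(p) + Color.green(p) + Color.blue(p)) / 3f;
                int i = y * width + x;
                totalDelta += Math.abs(history[i] - value);
                // running average: the new frame counts 1/W toward the history
                history[i] = (history[i] * (W - 1) + value) / W;
            }
        }
        return totalDelta; // compare against a threshold to decide "motion"
    }
}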
Also, for detecting positions of motion consider breaking the image into smaller pieces and doing them individually. Then you can find objects and track them across the screen by treating a single image as a grid of separate images.
