I am using this code to scale my canvas around a focus point:
private class MyScaleDetector
        extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    @Override
    public boolean onScaleBegin(ScaleGestureDetector detector) {
        // store the focus in the view's fields so onDraw() can use it
        focusx = detector.getFocusX();
        focusy = detector.getFocusY();
        return super.onScaleBegin(detector);
    }

    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        float factor = detector.getScaleFactor();
        scale *= factor;
        scale = Math.max(0.4f, Math.min(scale, 20.0f));
        return true;
    }
}
And this is the code I use to scale inside the onDraw method:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    canvas.scale(scale, scale, focusx, focusy);
}
My problem is that when I start scaling around the focus point (200,200), for example, the view starts moving toward that point at first, but as the scale keeps increasing it starts drifting away from the point. So my question is: how do I scale toward a specific point so that this point becomes the center of the viewport while scaling?
I think I should use canvas.translate() along with it, but I don't know how much to move the canvas in x and y while scaling.
Edit: the image below summarizes what I am trying to say.
[image]
There are several things to bear in mind here:
The ScaleDetector only detects scaling events; this sounds good but is not sufficient by itself to handle pinch/zoom in 2D (e.g. in a picture).
The 'focus' returned by the detector is the mid point between the two pointers (fingers); this is less useful since the natural assumption when you start scaling is that the image pixel under your finger will remain under your finger as you zoom. People often move fingers at different speeds (or deliberately keep one finger stationary), which invalidates the use of the mid-point.
You (probably) want to handle rotation too.
The broad approach I have used in the past is:
manually detect first and second touch events; record the coordinates of the two pointers when the second pointer touch event occurs. These points are to logically 'follow' the pointers from now on.
when a motion event occurs that continues to have two pointers down (i.e. a two-fingered rotation, zoom, or drag), determine the translation, scale and rotation as follows:
the scale is based on the distance between the new points compared to the original points (recorded at the start)
the rotation is based on the difference in the angle between the first two points and the angle between the most recent points
the translation is based arbitrarily on one of the points (usually the first) - just translate based on the pixels between the original point and the latest point
these values can be calculated and stored during the motion event, then used in the onDraw() method (a rough sketch follows below)
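A minimal sketch of that bookkeeping, assuming a custom View; the field and parameter names here are illustrative, not taken from any existing code, and pointer-id tracking and error handling are omitted:

private void updateTwoFingerTransform(PointF startP0, PointF startP1,
                                      PointF curP0, PointF curP1) {
    // scale: ratio of the current pointer distance to the starting distance
    float startDist = (float) Math.hypot(startP1.x - startP0.x, startP1.y - startP0.y);
    float curDist   = (float) Math.hypot(curP1.x - curP0.x, curP1.y - curP0.y);
    gestureScale = curDist / startDist;

    // rotation: difference between the starting and current angle of the pointer pair
    float startAngle = (float) Math.atan2(startP1.y - startP0.y, startP1.x - startP0.x);
    float curAngle   = (float) Math.atan2(curP1.y - curP0.y, curP1.x - curP0.x);
    gestureRotationDegrees = (float) Math.toDegrees(curAngle - startAngle);

    // translation: how far the first pointer has moved, in screen pixels
    gestureTranslateX = curP0.x - startP0.x;
    gestureTranslateY = curP0.y - startP0.y;
    invalidate(); // schedule onDraw()
}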
Note that in onDraw():
It's important to perform the translation first since you are working in screen pixels for the two reference points.
You can then rotate & scale in any order, since the scale is always equal in the X & Y directions.
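Putting that together, a minimal onDraw() sketch (assuming the fields from the sketch above, with a pivotX/pivotY of your choosing, e.g. the first pointer's start position) could be:

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    canvas.translate(gestureTranslateX, gestureTranslateY);   // translate first, in screen pixels
    canvas.rotate(gestureRotationDegrees, pivotX, pivotY);    // then rotate...
    canvas.scale(gestureScale, gestureScale, pivotX, pivotY); // ...and scale, in either order
    // draw the content here
}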
Edit to address the more specific request:
If all you really need to do is scale around a specific predefined point p (px, py) by an amount s, and keep p visible, then you can do the following:
canvas.translate(px, py);
canvas.scale(s, s);
canvas.translate(-px, -py);
This should keep p in the same pixel-position it was in prior to scaling. This will suffer from all the problems alluded to above as well as the risks of drift due to successive scale events around a moving focus (based on your original code). You are far better off doing what I described above.
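As a side note, Android's Canvas also has a pivot-point overload of scale() that performs the same translate-scale-translate sequence in one call:

canvas.scale(s, s, px, py); // scale about (px, py) in a single step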
Hello I am an inexperienced programmer and this is my first question on Stack Overflow!
I am attempting to implement 'fog of war' in my Java game. This means most of my map begins black, and then as one of my characters moves around, parts of the map will be revealed. I have searched around, including here, and found a few suggestions and tried tweaking them myself. Each of my approaches works; however, I run into significant runtime issues with each. For comparison, before any of my fog of war attempts I was getting 250-300 FPS.
Here is my basic approach:
Render my background and all objects on my JPanel
Create a black BufferedImage (fogofwarBI)
Work out which areas of my map need to be visible
Set the relevant pixels on my fogofwarBI to be fully transparent
Render my fogofwarBI, thus covering parts of the screen with black and in transparent sections allowing the background and objects to be seen.
For initialising the buffered image I have done the following in my FogOfWar() class:
private BufferedImage blackBI = loader.loadImage("/map_black_2160x1620.png");
private BufferedImage fogofwarBI = new BufferedImage(blackBI.getWidth(), blackBI.getHeight(), BufferedImage.TYPE_INT_ARGB);
public FogOfWar() {
fogofwarBI.getGraphics().drawImage(blackBI,0,0,null);
}
In each of my attempts I start the character in the middle of 'visible' terrain, i.e. in a section of my map which has no fog (where my fogofwarBI will have fully transparent pixels).
Attempt 1: setRGB
First I find the 'new' coordinates in my character's field of vision if it has moved, i.e. not every pixel within the character's range of sight, but just the pixels at the edge of his range of vision in the direction he is moving. This is done with a for loop, and will go through up to 400 or so pixels.
I feed each of these x and y coordinates into my FogOfWar class.
I check whether these x,y coordinates are already visible (in which case I don't do anything with them, to save time). I do this check by maintaining a Set of Lists, where each List contains two elements: an x and a y value, and the Set is a unique set of those coordinate Lists. The Set begins empty, and I add x,y coordinates to it to represent transparent pixels. I use a Set to keep the collection unique and because I understand Set.contains is a fast way of doing this check, and I store the coordinates in a List to avoid mixing up x and y.
If a given x,y position on my fogofwarBI is not currently visible, I set the RGB to be transparent using .setRGB, and add it to my transparentPoints Set so that coordinate will not be edited again in future.
Set<List<Integer>> transparentPoints = new HashSet<List<Integer>>();

public void editFog(int x, int y) {
    if (!transparentPoints.contains(Arrays.asList(x, y))) {
        fogofwarBI.setRGB(x, y, 0); // 0 is fully transparent in ARGB
        transparentPoints.add(Arrays.asList(x, y));
    }
}
I then render it using
public void render(Graphics g, Camera camera) {
    g.drawImage(fogofwarBI, 0, 0, Game.v_WIDTH, Game.v_HEIGHT,
            camera.getX() - Game.v_WIDTH/2, camera.getY() - Game.v_HEIGHT/2,
            camera.getX() + Game.v_WIDTH/2, camera.getY() + Game.v_HEIGHT/2, null);
}
Where I am basically applying the correct part of my fogofwarBI to my JPanel (800*600) based on where my game camera is.
Results:
Works correctly.
FPS of 20-30 when moving through fog, otherwise normal (250-300).
This method is slow due to the .setRGB function being run up to 400 times each time my game 'ticks'.
Attempt 2: Raster
In this attempt I create a raster of my fogofwarBI to play with the pixels directly in an array format.
private BufferedImage blackBI = loader.loadImage("/map_black_2160x1620.png");
private BufferedImage fogofwarBI = new BufferedImage(blackBI.getWidth(), blackBI.getHeight(), BufferedImage.TYPE_INT_ARGB);
WritableRaster raster = fogofwarBI.getRaster();
DataBufferInt dataBuffer = (DataBufferInt)raster.getDataBuffer();
int[] pixels = dataBuffer.getData();
public FogOfWar() {
fogofwarBI.getGraphics().drawImage(blackBI,0,0,null);
}
My editFog method then looks like this:
public void editFog(int x, int y) {
    if (!transparentPoints.contains(Arrays.asList(x, y))) {
        pixels[x + y * Game.m_WIDTH] = 0; // 0 is fully transparent in ARGB
        transparentPoints.add(Arrays.asList(x, y));
    }
}
My understanding is that the raster is in (constant?) communication with the pixels array, and so I render the BI in the same way as in attempt 1.
Results:
Works correctly.
A constant FPS of around 15.
I believe it is constantly this slow (regardless of whether my character is moving through fog or not) because whilst manipulating the pixels array is quick, the raster is constantly working.
Attempt 3: Smaller Raster
This is a variation on attempt 2.
I read somewhere that constantly resizing a BufferedImage using the 10-argument version of .drawImage is slow. I also thought that having a raster for a 2160*1620 BufferedImage might be slow.
Therefore I tried having my 'fog layer' only equal to the size of my view (800*600), and updating every pixel using a for loop, based on whether the current pixel should be black or visible from my standard transparentPoints Set and based on my camera position.
So now my editFog method just updates the Set of invisible pixels, and my render method looks like this:
public void render(Graphics g, Camera camera) {
    int xOffset = camera.getX() - Game.v_WIDTH/2;
    int yOffset = camera.getY() - Game.v_HEIGHT/2;
    for (int i = 0; i < Game.v_WIDTH; i++) {
        for (int j = 0; j < Game.v_HEIGHT; j++) {
            if (transparentPoints.contains(Arrays.asList(i + xOffset, j + yOffset))) {
                pixels[i + j * Game.v_WIDTH] = 0;
            } else {
                pixels[i + j * Game.v_WIDTH] = myBlackARGB;
            }
        }
    }
    g.drawImage(fogofwarBI, 0, 0, null);
}
So I am no longer resizing my fogofwarBI on the fly, but I am updating every single pixel every time.
Result:
Works correctly.
FPS: Constantly 1 FPS - worst result yet!
I guess that any savings of not resizing my fogofwarBI and having it smaller are massively outweighed by updating 800*600 pixels in the raster rather than around 400.
I have run out of ideas and none of my internet searching is getting me any further in trying to do this in a better way. I think there must be a way to do fog of war effectively, but perhaps I am not yet familiar enough with Java or the available tools.
Any pointers as to whether my current attempts could be improved, or whether I should be trying something else altogether, would be very much appreciated.
Thanks!
This is a good question. I am not familiar with the awt/swing type of rendering, so I can only try to explain a possible solution for the problem.
From a performance standpoint I think it is a better choice to chunk/raster the FOW into bigger sections of the map rather than using a pixel-based system. That will reduce the number of checks per tick, and updating it will also take fewer resources, as only a small portion of the window/map needs to update. The larger the grid, the fewer checks, but there is a visual penalty the bigger you go.
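As a rough illustration of that idea (the chunk size and field names below are made up; Game.m_WIDTH/m_HEIGHT stand in for the map dimensions used in the question's code):

static final int CHUNK = 32; // chunk size in map pixels (arbitrary)
boolean[][] revealed = new boolean[Game.m_WIDTH / CHUNK][Game.m_HEIGHT / CHUNK];

void reveal(int mapX, int mapY) {
    revealed[mapX / CHUNK][mapY / CHUNK] = true; // one cheap write per chunk, not per pixel
}

boolean isRevealed(int mapX, int mapY) {
    return revealed[mapX / CHUNK][mapY / CHUNK];
}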
Leaving it like that would make the FOW look blocky/pixelated, but it's not something you can't fix.
For the direct surroundings of the player, you can add a circle texture with the player at its center. You can then use blending (I believe the term in awt/swing is composite) to 'override' the alpha where the circle overlaps the FOW texture. This way the pixel-based updating is done by the rendering API, which usually uses hardware-accelerated methods to achieve these things. (For custom pixel-based rendering, something like 'shader scripts' are often used, if supported by the rendering API.)
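In Java2D that blending could look roughly like this, punching a fully transparent circle into the ARGB fog image (playerX, playerY and radius are placeholder names; for soft edges you could instead fill with a RadialGradientPaint while using AlphaComposite.DstOut):

Graphics2D g2 = fogofwarBI.createGraphics();
g2.setComposite(AlphaComposite.Clear); // subsequent fills write zero colour and zero alpha
g2.fillOval(playerX - radius, playerY - radius, radius * 2, radius * 2);
g2.dispose();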
This is enough if you only need temporary vision in the FOW (if you don't need to 'remember' the map); you don't even need a texture grid for the FOW then. But I suspect you do want to 'remember' the map, so in that case:
The blocky/pixelated look can be fixed the way it's done with grid-based terrain: basically, add small additional textures/shapes based on the surroundings to make things look nice. The link below provides good examples and a detailed explanation of how to do these 'terrain transitions', as they are called.
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/tilemap-based-game-techniques-handling-terrai-r934/
I hope this gives a better result. If you cannot get a better result, I would advise switching over to something like OpenGL for the render engine, as it is meant for games, while the awt/swing API is primarily used for UI/application rendering.
I'm trying to develop a Zelda-like game. So far I am using bitmaps and everything runs smoothly. At this point the camera of the hero is fixed, meaning that he can be anywhere on the screen.
The problem with that is scaling. Supporting every device and keeping everything in perfectly sized rects doesn't seem to be that easy :D
To prevent that I need a moving camera. Then I can scale everything to be equally sized on every device. The hero would then be in the middle of the screen for the first step.
The working solution for that is
canvas.save(); // save the canvas state so the restore() below matches
xCam += hero.moveX;
yCam += hero.moveY;
canvas.translate(xCam, yCam);
drawRoom();
canvas.restore();
drawHero();
I do it like this because I don't want to rearrange every tile in the game. I guess that could be too much processing on some devices. As I said, this works just fine: the hero is in the middle of the screen, and the whole room is moving.
But the problem is collision detection.
Here a quick example:
wall.rect.intersects(hero.rect);
Assuming the wall was originally at (0,0) and the hero is at (screenWidth/2, screenHeight/2), they should collide at some point.
The problem is that the x and y of the wall.rect never change. They are (0,0) at any point of the canvas translation, so they can never collide.
I know that I can work with canvas.getClipBounds() and then use the coordinates of the returned rect to change every tile, but as I mentioned above, I am trying to avoid that; plus, the returned rect only works with int values, not float.
Do you guys know any solution for that problem, or has anyone ever fixed something like this?
Looking forward to your answers!
You can separate your model logic and view logic. Suppose your development dimension for the window is WxH. In this case, if a sprite in the model is 100x100 and placed at (0,0), it covers the area from (0,0) to (100,100). Let's add a second sprite (same 100x100 dimension) at (105,0), slightly to the right of the first one, which covers the area from (105,0) to (205,100). It is obvious that in the model they are not colliding. Now, as for the view: if your target device happens to be WxH, you just draw the model as it is. If your device has a screen with w = 2*W, h = 2*H, so twice as big in each direction, you just multiply the x and y by w/W and h/H respectively. Therefore we get 2x for x and y, so on screen the 1st object runs from (0,0) to (200,200) and the 2nd object from (210,0) to (410,200). As can be seen, they are still not colliding. To sum up: separate your game logic from your drawing (rendering) logic.
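A minimal sketch of that separation, reusing the rects from the question (screenWidth, screenHeight, WORLD_WIDTH, WORLD_HEIGHT and heroBitmap are placeholder names, not from the original code):

// Collision is checked in model (map) coordinates, which never change with the camera.
boolean hit = wall.rect.intersects(hero.rect);

// Only rendering converts model coordinates to screen coordinates.
float sx = (float) screenWidth  / WORLD_WIDTH;  // horizontal view scale
float sy = (float) screenHeight / WORLD_HEIGHT; // vertical view scale
canvas.drawBitmap(heroBitmap,
        hero.rect.left * sx, // model x -> screen x
        hero.rect.top  * sy, // model y -> screen y
        null);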
I think you should have variables holding the player's position on the "map". You can then use this to determine the collision with the non-changing wall. It should look something like this (depending on the rest of your code):
canvas.save();
canvas.translate(-hero.rect.centerX(), -hero.rect.centerY());
drawRoom();
canvas.restore();
drawHero();
Generally you should do the calculations in map coordinates, not on screen. For rendering just use the (negative) player position for translation.
I am developing an augmented reality application for Android and trying to use OpenGL to place cubes at locations in the world. My current method can be seen in the code below:
for (Marker ma : ARData.getMarkerlist().values()) {
    Log.d("populating", "");
    gl.glPushMatrix();
    Location maLoc = new Location("loc");
    maLoc.setLatitude(ma.lat);
    maLoc.setLongitude(ma.lng);
    maLoc.setAltitude(ma.alt);
    float distance = currentLoc.distanceTo(maLoc);
    float bearing = currentLoc.bearingTo(maLoc);
    Log.d("distance", String.valueOf(distance));
    Log.d("bearing", String.valueOf(bearing));
    gl.glRotatef(bearing, 0, 0, 1);
    gl.glTranslatef(0, 0, -distance);
    ma.cube.draw(gl);
    gl.glPopMatrix();
}
gl.glRotatef(y, 0, 1, 0);
gl.glRotatef(x, 1, 0, 0);
Where y is yaw and x is the pitch. Currently I am getting a single cube on the screen at a 45-degree angle, some way in the distance. It looks like I am getting sensible bearing and distance values. Could it have something to do with the phone's orientation? If you need more code, let me know.
EDIT: I updated the bearing rotation to gl.glRotatef(bearing,0,1,0); I am now getting my cubes mapped horizontally along the screen at different depths. Still no movement using heading and pitch, but @Mirkules has identified some reasons why that might be.
EDIT 2: I am now attempting to place the cubes by rotating the matrix by the difference in angle between heading and bearing to a marker. However, all I get is a sort of jittering where the cubes appear to be rendered in a new position and then jump back to their old position. Code as above except for the following:
float angleDiff = bearing - y;
gl.glRotatef((angleDiff),0,1,0);
gl.glTranslatef(0,0,-distance);
bearing and y are both normalised to a 0 - 360 scale. Also, I moved my "camera rotation" to above the code where I set the markers.
EDIT 3: I have heading working now using float angleDiff = (bearing + y)/2;. However, I can't seem to get pitch working. I have attempted to use gl.glRotatef(-x,1,0,0); but that doesn't seem to work.
It's tricky to tell exactly what you're trying to do here, but there are a few things that stick out as potential problems.
Firstly, your final two rotations don't seem to actually apply to anything. If these are supposed to represent a movement of the world or camera (which mostly amounts to much the same thing) then they need to happen before drawing anything.
Then your rotations themselves perhaps won't entirely do what you intend.
Your cube is rotated around the Z axis. The usual convention in GL is for the camera to look down the Z axis, with the Y axis being considered 'up'. You can naturally interpret axes however you like, but a rotation around 'Z' would not typically be 'bearing', but 'roll'. 'Bearing' to me would be analogous to 'yaw'.
As you translate along the Z axis, I assume you are trying to position the object by rotating and translating, but obviously if the rotation is around the same axis as you translate along, it won't actually alter the position of the cube - it will always just be directly in front of the camera, spinning on its axis.
I'm not really clear on why you're trying to position the cube like that when it seems like you start off with a more specific location. You could probably directly construct a more appropriate matrix.
Finally, your camera/world rotation is two concatenated rotations around Y and X. You call these yaw and pitch, but typically using Euler angles for a camera rotation does not produce an intuitive result where terms like yaw and pitch make complete sense. It is common to maintain an orientation and apply individual rotations to that in order to update it, rather than attempting to update several dependent rotations.
So yes, I would expect that this code, in the absence of other matrix operations, would likely result in drawing one or more cubes straight ahead which are simply rotated by some angle around the view direction.
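For what it's worth, the ordering described above might look roughly like the sketch below, reusing the variables from the question. This is a sketch only, not the poster's actual fix; whether the bearing needs negating depends on the phone's and your world's coordinate conventions.

// camera/world orientation first, before any marker is drawn
gl.glLoadIdentity();
gl.glRotatef(x, 1, 0, 0); // pitch
gl.glRotatef(y, 0, 1, 0); // yaw / heading

for (Marker ma : ARData.getMarkerlist().values()) {
    gl.glPushMatrix();
    Location maLoc = new Location("loc");
    maLoc.setLatitude(ma.lat);
    maLoc.setLongitude(ma.lng);
    maLoc.setAltitude(ma.alt);
    float distance = currentLoc.distanceTo(maLoc);
    float bearing = currentLoc.bearingTo(maLoc);
    gl.glRotatef(bearing, 0, 1, 0);   // bearing is a rotation about the up (Y) axis, not Z
    gl.glTranslatef(0, 0, -distance); // then push the cube out along -Z
    ma.cube.draw(gl);
    gl.glPopMatrix();
}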
I have a map that is being drawn on the screen inside a VisualizationView. I am able to zoom in and out on this map, depending on where I focused inside this view using the touch functionality.
I also toggle between zooming and drawing on this map. When I draw an arrow on the map and switch back to zooming, I want the arrow I drew on the map to stay at the same spot relative to the map I am zooming in and out on, meaning the x and y position of this arrow have to be adjusted.
I have all the variables needed; I just don't know how to solve this since it's math related.
Can someone explain to me how I can solve this math-wise? I tried looking on the internet but I could not find a good explanation. Also, the map resolution on the tablet is always 576 x 576.
public void onZoom(final double focusX, final double focusY,
        final double factor) {
    // pseudocode
    Triangle.x = ..
    Triangle.y = ..
    Triangle.repaint();
    // pseudocode
}
The code for the zooming is as follows, straight from the library:
public void zoom(double focusX, double focusY, double factor) {
    synchronized (mutex) {
        Transform focus = Transform.translation(toMetricCoordinates((int) focusX, (int) focusY));
        double zoom = RosMath.clamp(getZoom() * factor, MINIMUM_ZOOM, MAXIMUM_ZOOM) / getZoom();
        transform = transform.multiply(focus).scale(zoom).multiply(focus.invert());
    }
}
I hope someone can explain the math behind solving this.
I'm not directly familiar with the library you're using, but to move the location of the arrow (triangle) to its new location you should apply the same transform to it as you do to the map.
Think of it this way: Before zooming, the location of the arrow (the center or head of the arrow, however you define "location") and the point on the map it's pointing to have the same coordinates. After zooming, the point on the map will have moved to a new location and you want the arrow to move to that same new location, thus you need to use the same transform. Like I said, I'm not familiar with the library you're using so I can't tell you exactly how to do that, but that's where you want to be.
Note, however, that you only want to transform one point of the arrow/triangle and draw the other points relative to it. If you transform all 3 you'll end up with the arrow getting larger and smaller as you zoom in and out (which I assume you don't want).
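In plain coordinates, ignoring the library's Transform class, moving a single anchor point the same way the map moves is just the following (arrowX, arrowY and the setter are placeholder names):

// a point P zoomed about focus F with factor s moves to F + s * (P - F)
double newX = focusX + (arrowX - focusX) * factor;
double newY = focusY + (arrowY - focusY) * factor;
triangle.setPosition(newX, newY); // then draw the other vertices relative to this anchor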
I am developing a small game in Java, and I'm rewriting the player movement system so that it is not grid-based. It is a 2D side-scroller, and what I'm trying to achieve is basic player movement: when the user presses and holds the right key the player moves right, and the same for the left key. The problem I am having is that the paint component in another class draws the image of the player on the screen, with positions X and Y stored in a Settings class. A KeyListener class then detects when the user is pressing right and adds to the X value (and the same for left, but subtracts 1 every time). This creates a slowly moving character on the screen, and what I want to do is have him move faster without adding more than 1px each time, as that would look like he was skipping pixels (I've already tried).
I was wondering if there is a better way to go about this, as the code I'm using is bare-minimum, and my desired outcome is a smoothly moving player.
KeyListener Snippet:
public void keyPressed(KeyEvent arg0) {
    int key = arg0.getKeyCode();
    if (key == 39) { // Right Key
        Settings.player_pos_x++;
    } else if (key == 37) { // Left Key
        Settings.player_pos_x--;
    }
    main.Game.redo();
}
Drawing User on-screen:
g.drawImage(player_image, Settings.player_pos_x, Settings.player_pos_y, this);
Any help is appreciated, if you need any more information or code please feel free to ask.
Let's try again :)
Double buffering
Quote: http://msdn.microsoft.com/en-us/library/b367a457.aspx
Flicker is a common problem when programming graphics. Graphics operations that require multiple complex painting operations can cause the rendered images to appear to flicker or have an otherwise unacceptable appearance.

When double buffering is enabled, all paint operations are first rendered to a memory buffer instead of the drawing surface on the screen. After all paint operations are completed, the memory buffer is copied directly to the drawing surface associated with it. Because only one graphics operation is performed on the screen, the image flickering associated with complex painting operations is eliminated.
Quote: http://docs.oracle.com/javase/tutorial/extra/fullscreen/doublebuf.html
Suppose you had to draw an entire picture on the screen, pixel by pixel or line by line. If you were to draw such a thing directly to the screen (using, say, Graphics.drawLine), you would probably notice with much disappointment that it takes a bit of time. You will probably even notice visible artifacts of how your picture is drawn. Rather than watching things being drawn in this fashion and at this pace, most programmers use a technique called double-buffering.
Perhaps you could write the app to iterate multiple redraws to the right, each only 1 or 2 pixels, per keyboard input received. This way, you're kind of artificially setting how fast you want it to move. Otherwise, you'd be limited to how the user has their keyboard repeat rate set up.
Also, Java's JPanel is not exactly the place to look for video game efficiency. Just saying; you might want to look at something like OpenGL for that.
At best, you could optimize it to have a transition buffer outside of the JPanel drawing logic, with the same draw functionality as a JPanel. This way you write to the buffer, then copy the whole buffer to the screen in a single drawImage call. I don't know if you're already doing that, but I think it avoids flicker; I read about it a long time ago, and it's called double buffering.
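A minimal sketch of that manual double buffering inside a JPanel subclass, reusing the names from the question (note that Swing components are already double buffered by default, so this mainly matters if you run your own rendering loop):

// off-screen buffer the size of the view
BufferedImage buffer = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);

@Override
protected void paintComponent(Graphics g) {
    super.paintComponent(g);
    Graphics2D bg = buffer.createGraphics();
    bg.setColor(Color.BLACK);
    bg.fillRect(0, 0, buffer.getWidth(), buffer.getHeight()); // clear the back buffer
    bg.drawImage(player_image, Settings.player_pos_x, Settings.player_pos_y, null);
    bg.dispose();
    g.drawImage(buffer, 0, 0, null); // single copy to the screen
}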
While double buffering will probably help, you should first address how you are calculating the distance you move each time.
Change in distance (dx) can be defined as the velocity (v) times the change in time (dt): dx = v * dt. From what I can see, you are omitting dt (the change in time). You should calculate the time difference since the last time the character was moved and multiply that by your velocity. This way, whether your distance-processing code is executed 10 times or 100 times in 10 seconds, the character will always move the same distance.
The code will look something like this:
long currentTime = System.nanoTime();
double deltaTime = (currentTime - previousTime) / 1_000_000_000.0; // seconds since last update
Settings.player_pos_x += velX * deltaTime;
previousTime = currentTime;
With this code you will probably need to significantly increase velX. Typically I would have a constant that I would multiply by the velocity so they are smaller numbers.
Additionally, if you want better movement and physics, look into JBox2D, although it's probably overkill for you right now. Their documentation might also help you understand basic physics.
To answer your question about the speed: most games I've seen store a global int for the velocity in each plane (x and y planes), then apply the velocity to the current position rather than simply incrementing it as you have done above.
public int velX = 2;
public int velY = 2;

public void keyPressed(KeyEvent arg0) {
    int key = arg0.getKeyCode();
    if (key == 39) { // Right Key
        Settings.player_pos_x += velX;
    } else if (key == 37) { // Left Key
        Settings.player_pos_x -= velX;
    }
    main.Game.redo();
}
Once you have that set up, try adding operations which may increase the velocity (i.e. holding the key down for a long time, or something else).
Also, try to implement a jump operation, where the player's Y position goes up, and try to add a gravity field to make it come back down.