I am trying to develop something from my research work using Android, but I am not a coding person, so I am having a hard time figuring out how to do things. I figured out ways to achieve my functionality, but I am stuck on an issue that I could not resolve on my own. It would be great if you could help me with it.
I am trying to display an image that is bigger than the screen size and make it play a sound or vibrate when I touch a particular colored pixel within the image. I was able to do this for the first instance of the image (i.e., the image displayed once I start my application), but as soon as I pan it no longer works. For example, my image has a green pixel in the middle of the screen, and after I pan it moves to the left. I make the device vibrate once I touch the green pixel. The device vibrates while the green is in the center, but after I pan the behaviour is not updated: it still vibrates when I touch the center of the screen even though there is a different color there. I am guessing that the program fixes the screen coordinates and is not using the image coordinates. I tried using event.getX() and event.getRawX(), but both reference screen coordinates only.
My questions are:
* Is there a way to target the image coordinates rather than the screen coordinates?
* If not, how else can I accomplish this task?
Well, it's kind of semantic, but there is no concept of "image coordinates".
If you think about it, the touch is handled in the first instance by a piece of hardware which has absolutely no knowledge of what you are touching beyond its physical pixels, and that is what it reports to Android.
In turn, Android has no knowledge of what that chunk of image pixels is. The position of a particular pixel in your image relative to the screen only has meaning inside your app. Since the touch event originates outside your app, there is no way to associate the two...
...unless you make the association yourself. You moved the image in your code in response to a touch event. Remember how far you moved it, using a variable defined in the class handling the touch event, and then use that as an offset to the x and y given to you in subsequent touch events.
E.g. you pan the image 200 pixels to the left. A touch dead centre on the screen now corresponds to the image pixel at (imageWidth / 2) + 200 horizontally, since the physical pixel being touched is now 200 to the right of the image's centre.
[EDIT] Having thought a little more about this: you might be using a Matrix to pan the image, and if you aren't, do check out the Matrix class. If you are, then you can query the matrix at any time to get the total amount of pan in x and y. If you are also doing scaling and/or rotation, things get a bit more complex, but that would merit a new question.
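If you do use a Matrix, a convenient trick is to invert it and map the touch point back into bitmap space. Below is a minimal sketch; the names imageMatrix and event are illustrative placeholders (the Matrix holding your current pan/scale, and the incoming MotionEvent), not from your code.
// Sketch only: map a screen touch back into bitmap coordinates.
float[] touchPoint = { event.getX(), event.getY() };
Matrix inverse = new Matrix();
if (imageMatrix.invert(inverse)) {
    inverse.mapPoints(touchPoint);      // now in bitmap coordinates
    int imgX = (int) touchPoint[0];
    int imgY = (int) touchPoint[1];
    // bounds-check against bitmap.getWidth()/getHeight() before calling getPixel()
}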
[EDIT]
To get the colour at your x,y:
int pixel = bitmap.getPixel(x,y);
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
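And, to tie it back to the original goal, a rough sketch of checking for a "green enough" pixel and vibrating. The threshold values are arbitrary assumptions, and the getSystemService() call assumes this runs inside an Activity or other Context.
// Sketch: vibrate when the touched bitmap pixel is predominantly green.
if (greenValue > 200 && redValue < 100 && blueValue < 100) {
    Vibrator vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
    vibrator.vibrate(200);  // vibrate for 200 ms
}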
However, I have a weird issue: when drawing, it seems the outer 1px of an image is stretched to fit the target rectangle, but the inside is only stretched to an extent. I was drawing 48x48 tiles, but drew a 500x500 tile to show the issue. [The 500x500 one draws fine.]
The worst part is that it seems to choose when to stretch and when not to, and also what to stretch. I'm sorry this is hard to explain, but I have attached an image that I hope does a better job.
It could just be me misunderstanding how to use draw with a SpriteBatch.
Edit: the tile is 48x48, not 64x64; I've just been working all day.
This is because you are not rendering "pixel perfect", which means your image does not line up with the pixel grid of your monitor. A quick fix might be to set a linear filter for your textures: by default the filter is nearest, so a pixel on the screen inherits the closest color it can get. A linear filter will interpolate colors and make that line "look" thinner.
texture.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
If you are using TexturePacker you can do this in one go by altering its settings.
texturePackerSetting.filterMin = Texture.TextureFilter.Linear;
texturePackerSetting.filterMag = Texture.TextureFilter.Linear;
Or you could edit the atlas file itself by changing the filter parameter to:
filter: Linear,Linear
This obviously costs more processing power, since more calculations are needed for each pixel drawn to the screen, but I would not worry about it until your drawing starts to become a bottleneck.
Another solution is to draw pixel perfect, which means setting your viewport to the size of the device (Gdx.graphics.getWidth(), Gdx.graphics.getHeight()), in other words a ScreenViewport, and drawing your textures at the exact sizes you want them; see the sketch below. Of course this means a screen with more pixels sees more of your game world than a screen with fewer pixels, and the more pixels a device has, the smaller your textures will look. Another drawback is that you have to forget about zooming, or draw sprites for each zoom level so they line up with the pixel grid of the device again.
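For completeness, a minimal sketch of that pixel-perfect route, assuming libGDX with an existing SpriteBatch called batch, a tileTexture, and x/y positions; those names are placeholders.
// Sketch: pixel-perfect rendering with a ScreenViewport (one world unit == one screen pixel).
OrthographicCamera camera = new OrthographicCamera();
ScreenViewport viewport = new ScreenViewport(camera);
viewport.update(Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);

batch.setProjectionMatrix(camera.combined);
batch.begin();
batch.draw(tileTexture, x, y, 48, 48);  // draw the tile at its exact pixel size
batch.end();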
I'm trying to develop a Zelda-like game. So far I am using bitmaps and everything runs smoothly. At this point the camera is fixed, meaning the hero can be anywhere on the screen.
The problem with that is scaling. Supporting every device and keeping everything in perfectly sized rects doesn't seem to be that easy :D
To get around that I need a moving camera; then I can scale everything to be equally sized on every device. The hero would then be in the middle of the screen as a first step.
The working solution for that is:
xCam += hero.moveX;
yCam += hero.moveY;
canvas.save();                 // save the untranslated state
canvas.translate(xCam, yCam);  // shift the whole room
drawRoom();
canvas.restore();              // back to screen coordinates
drawHero();                    // hero stays centred on screen
I do it like this because I don't want to rearrange every tile in the game; I guess that could be too much processing on some devices. As I said, this works just fine: the hero is in the middle of the screen, and the whole room is moving.
But the problem is collision detection.
Here's a quick example:
wall.rect.intersects(hero.rect);
Assuming the wall was originally at (0/0) and the hero is at (screenWidth/2, screenHeight/2), they should collide at some point.
The problem is that the x and y of wall.rect never change. They stay at (0/0) regardless of the canvas translation, so the two rects can never collide.
I know that I can work with canvas.getClipBounds() and then use the coordinates of the returned rect to adjust every tile, but as I mentioned above I am trying to avoid that; plus, the returned rect only works with int values, not float.
Do you guys know any solution for that problem, or has anyone ever fixed something like this?
Looking forward to your answers!
You can separate your model logic and view logic. Suppose your development dimension for the window is WxH. In this case, if your sprite in the model is 100x100 and placed at 0,0, it covers the area from 0,0 to 100,100. Let's add the next sprite (same 100x100 dimension) at 105,0 (slightly to the right of the first one), which covers the area from 105,0 to 205,100. It is obvious that in the model they are not colliding. Now, for the view: if your target device happens to be WxH, you just draw the model as it is. If your device has a screen with w = 2*W and h = 2*H, so twice as big in each direction, you just multiply x and y by w/W and h/H respectively. Therefore we get a factor of 2 for x and y, which on screen becomes: 1st object from 0,0 to 200,200; 2nd object from 210,0 to 410,200. As you can see, they are still not colliding. To sum up, separate your game logic from your drawing (rendering) logic.
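As a rough sketch of that separation; the names screenWidth, screenHeight, designWidth, designHeight and heroBitmap are placeholders, not from your code.
// Sketch: collision in model (design) coordinates, scaling applied only at draw time.
float scaleX = (float) screenWidth  / designWidth;
float scaleY = (float) screenHeight / designHeight;

// Model-space collision: these rects never change with the screen size.
boolean colliding = Rect.intersects(wall.rect, hero.rect);

// View-space rendering: multiply model coordinates by the scale factors.
RectF drawRect = new RectF(hero.rect.left  * scaleX, hero.rect.top    * scaleY,
                           hero.rect.right * scaleX, hero.rect.bottom * scaleY);
canvas.drawBitmap(heroBitmap, null, drawRect, null);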
I think you should have variables holding the player's position on the "map". You can then use these to determine the collision with the non-changing wall. It should look something like this (depending on the rest of your code):
canvas.save();
canvas.translate(-hero.rect.centerX(), -hero.rect.centerY());
drawRoom();
canvas.restore();
drawHero();
Generally you should do the calculations in map coordinates, not in screen coordinates. For rendering, just use the (negative) player position for the translation; a sketch follows below.
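A possible sketch of that, assuming the hero keeps a map-space position (hero.mapX and hero.mapY are made-up fields here) and that both rects live in map coordinates:
// Sketch: update and collide in map coordinates, translate the canvas only for rendering.
hero.mapX += hero.moveX;
hero.mapY += hero.moveY;
hero.rect.offsetTo((int) hero.mapX, (int) hero.mapY);     // keep the map-space rect in sync

boolean blocked = Rect.intersects(wall.rect, hero.rect);  // both rects in map coordinates

canvas.save();
canvas.translate(screenWidth / 2f - hero.rect.centerX(),  // camera centred on the hero
                 screenHeight / 2f - hero.rect.centerY());
drawRoom();
canvas.restore();
drawHero();                                               // drawn at the screen centre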
I've been looking around and I couldn't find an answer to this. What I have done is create a cube/box, and the camera squashes and stretches it depending on where I am looking. This all seems to resolve itself when the screen is perfectly square, but when I'm using 16:9 it stretches and squashes the shapes. Is it possible to change this?
[Screenshots attached: one at a 16:9 aspect ratio and one at 500px x 500px.]
As a side question, would it be possible to change the color of the background "sky"?
OpenGL uses a cube [-1,1]^3 to represent the view frustum in normalized device coordinates. The viewport transform stretches this in the x and y directions to [0,width] and [0,height]. So to get the correct output aspect ratio, you have to take the viewport dimensions into account when transforming the vertices into clip space. Usually this is part of the projection matrix. The old fixed-function gluPerspective() function has a parameter to directly create a frustum for a given aspect ratio. As you do not show any code, it is hard to suggest what exactly you should change, but it should be quite easy, as it boils down to a simple scale operation along x and y.
To the side question: that color is defined by the values the color buffer is set to when it is cleared. You can set the color via glClearColor().
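As an illustration (assuming Android OpenGL ES 2.0 with android.opengl.Matrix; viewportWidth and viewportHeight stand in for your surface size), the projection can be built from the viewport's aspect ratio, and the clear color sets the "sky":
// Sketch: aspect-correct perspective projection plus a sky-blue clear color (GL ES 2.0).
float aspect = (float) viewportWidth / viewportHeight;       // e.g. 16f / 9f
float[] projection = new float[16];
Matrix.perspectiveM(projection, 0, 45f, aspect, 0.1f, 100f); // fovY, aspect, near, far

GLES20.glViewport(0, 0, viewportWidth, viewportHeight);
GLES20.glClearColor(0.4f, 0.6f, 0.9f, 1f);                   // background "sky" color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);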
A portion of my app involves the user drawing images that will be later strung together in a PDF. The user is free to use the entire screen to draw. Once the user is done drawing, I'd like to trim off all of the white space before adding the image to a PDF. This is where I am having problems. I've thought of two different methods to determine the location of trimmable white space and both seem clumsy.
My first thought was having the motion event of the stylus record whether the event has gone outside of the box so far; if it has, I would expand the box to accommodate it. Unfortunately, I could see checking on every motion event being bad for performance. I can't just look at up and down events, because the user could draw something like the letter V.
Then I thought I could look at all the pixels (using getPixel()) and see where the highest, lowest, rightmost and leftmost black pixels are. Again this seems like a really inefficient way to find the box. I'm sure I could skip some pixels to improve performance, but I can't skip too many.
Is there a standard way of doing what I want to do? I haven't been able to find anything.
Inside your editor, at the point where you record that a pixel has been drawn on, you can update the maximum and minimum X and Y, and then use them later to crop the image.
If the user is drawing, aren't you already handling the onTouchEvent callback in order to capture the drawing events? If so, it shouldn't be a big deal to keep minX, maxX, minY and maxY fields and check each recorded drawing event against them, along the lines of the sketch below.
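Something along these lines, as a sketch: minX/minY/maxX/maxY are fields you would add yourself, and fullBitmap stands in for the bitmap the user drew on.
// Sketch: track the drawing's bounding box while handling touch events.
// Initialise minX/minY to Float.MAX_VALUE and maxX/maxY to -Float.MAX_VALUE.
@Override
public boolean onTouchEvent(MotionEvent event) {
    float x = event.getX();
    float y = event.getY();
    minX = Math.min(minX, x);
    minY = Math.min(minY, y);
    maxX = Math.max(maxX, x);
    maxY = Math.max(maxY, y);
    // ...existing drawing code...
    return true;
}

// Later, crop to the tracked bounds (clamp to the bitmap edges and add padding as needed):
Bitmap trimmed = Bitmap.createBitmap(fullBitmap,
        (int) minX, (int) minY,
        (int) (maxX - minX), (int) (maxY - minY));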
I'm having a problem capturing all of the pixel coordinate values while dragging with the mouse, using the mouseDragged event in Java.
While I'm dragging slowly, I'm able to get all of the pixel coordinate values,
but when I drag fast, I get only about one third of the pixel coordinate values.
For example, if I drag slowly I get 760 pixel values, but when I drag fast I get only 60 pixel coordinate values.
I need all the points because I'm going to use them for signature comparison.
Project description:
The user will draw their signature with the mouse on the log-in page, and this signature will be compared with the signature the user drew on the sign-up page.
I'm going to compare the signatures using the pixel values, so only by getting all of the coordinate values can I compare them.
Please help me.
Windows is not going to give you this; it's up to the refresh rate of the mouse, its DPI, and the rate at which Windows polls for mouse events. You are not going to get every pixel, so you will need to allow for some ambiguity.
(It doesn't matter which language you use, Java or C#.)
Mouse movement events occur every few milliseconds, not for every pixel of movement, so when the mouse is moving rapidly some pixels will be missed. If you want every single pixel, you'll have to interpolate between the reported points whenever the new location is not adjacent to the previous one. One way to interpolate the pixels between two coordinates is Bresenham's line algorithm (sketched below): http://en.wikipedia.org/wiki/Bresenhams_line_algorithm
Edit: Fixed link.
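For reference, a small self-contained sketch of that interpolation in Java. In mouseDragged you would keep the previous event's coordinates and call interpolate(prevX, prevY, e.getX(), e.getY()) to fill in the missing points; the class and method names are just placeholders.
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

public class LineInterpolator {
    // Bresenham's line algorithm: returns every integer point from (x0, y0) to (x1, y1).
    public static List<Point> interpolate(int x0, int y0, int x1, int y1) {
        List<Point> points = new ArrayList<>();
        int dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        while (true) {
            points.add(new Point(x0, y0));
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
            if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
        }
        return points;
    }
}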