Mouse dragging problem - Java

I'm having a problem capturing all of the pixel coordinates while dragging with the mouse, using the mouseDragged event in Java.
While dragging slowly I get all of the pixel coordinates, but when I drag fast I get only about a third of them.
For example, if I drag slowly I get 760 pixel values, but when I drag fast I get only 60.
I need all of the points because I'm going to use them for signature comparison.
Project description:
The user signs with the mouse on the login page, and this signature is compared with the one the user provided on the sign-up page.
I'm going to compare the signatures using pixel values, so only by capturing all of the coordinate values can I compare them.

Windows is not going to give you this; it depends on the report rate of the mouse, its DPI, and the rate at which Windows polls for mouse events. You are not going to get every pixel, so you will need to allow for some ambiguity.
(It doesn't matter whether the language is Java or C#.)

Mouse movement events arrive every few milliseconds, not for every pixel of movement, so when the mouse is moving rapidly some pixels will be missed. If you want every single pixel, you'll have to interpolate between positions whenever the new location is not adjacent to the previous one. One way to interpolate the pixels between two coordinates is Bresenham's line algorithm: http://en.wikipedia.org/wiki/Bresenhams_line_algorithm
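As a rough sketch of that idea (the listener class and point list are assumptions, not code from the question), you can run Bresenham between each pair of consecutive drag events:

import java.awt.Point;
import java.awt.event.MouseEvent;
import java.awt.event.MouseMotionAdapter;
import java.util.ArrayList;
import java.util.List;

public class SignatureCapture extends MouseMotionAdapter {
    private final List<Point> points = new ArrayList<>();
    private Point last;

    @Override
    public void mouseDragged(MouseEvent e) {
        Point current = e.getPoint();
        if (last == null) {
            points.add(current);   // first sample of the stroke
        } else {
            bresenham(last.x, last.y, current.x, current.y);
        }
        last = current;
    }

    // Classic integer Bresenham; adds every pixel strictly after (x0,y0)
    // up to and including (x1,y1), so consecutive segments don't duplicate points.
    private void bresenham(int x0, int y0, int x1, int y1) {
        int dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
        int err = dx - dy;
        while (x0 != x1 || y0 != y1) {
            int e2 = 2 * err;
            if (e2 > -dy) { err -= dy; x0 += sx; }
            if (e2 < dx)  { err += dx; y0 += sy; }
            points.add(new Point(x0, y0));
        }
    }
}

(Reset last to null on mouseReleased so separate strokes aren't joined.)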

Related

Java - Screen Coordinates to Image Coordinates

I am using a Java application to display an image on the screen. I also am using an eye-tracker device which records the absolute pixel X,Y locations where the person is looking on the screen.
However, what I need to do is convert these X,Y coordinates from the screen positions into the X,Y locations of the image. In other words, somehow I need to figure out that (just an example) 482, 458 translates to pixel 1,1 (the upper left pixel) of the image.
How can I determine the image's placement on the screen (not relative to anything)?
I saw a few posts about getComponentLocation and some other APIs, but in my experimenting they seem to give coordinates relative to the window. I have also had problems with that, because the 1,1 coordinate they give is within the window, and there is actually a bar at the top of the window (with the title and the close and minimize buttons) whose height I do not know, so I cannot easily translate.
Surely there must be a way to get the absolute pixel location on the screen of a component?
If we are talking about a Swing/AWT application, then the class java.awt.Component has the method getLocationOnScreen, which seems to do what you want.
And yes, as @RealSkeptic mentioned in the comments on the question:
SwingUtilities.convertPointFromScreen
will do all this work for you, taking the component hierarchy into account.
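For example, a minimal sketch (imageLabel, the Swing component showing the image, is a hypothetical name):

import java.awt.Point;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public final class ScreenToImage {
    // screenX/screenY are the eye tracker's absolute screen coordinates.
    static Point toImageCoordinates(int screenX, int screenY, JLabel imageLabel) {
        Point p = new Point(screenX, screenY);
        SwingUtilities.convertPointFromScreen(p, imageLabel); // mutates p in place
        return p;  // now relative to imageLabel; (0,0) is its top-left pixel
    }
}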

Convert pixels of a picture to GPS coordinates

I'm doing a project on Android for measuring areas of land through photographs taken by a drone.
I have an aerial photograph that contains a GPS coordinate. For practical purposes I assume that coordinate represents the central pixel of the picture.
I need to move pixel by pixel in the picture to reach the corners and work out what GPS coordinates the corners of the picture represent.
I have no idea how to achieve this. I have searched but cannot find anything similar to my problem.
Thank You.
If you know the altitude at which the photo was taken and the camera's maximum capture angle (its field of view), I believe you can determine, through trigonometry, the deviation of each pixel from the center in meters, and then determine its GPS coordinate.
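A rough sketch of that trigonometry (all names are assumptions, and it ignores camera tilt, lens distortion and terrain relief): a straight-down photo taken at altitude h with horizontal field of view fov covers a ground width of 2 * h * tan(fov / 2), which gives a meters-per-pixel scale.

public final class PixelToGps {
    static final double EARTH_RADIUS_M = 6_371_000;

    // Returns {latitude, longitude} for a pixel, given the GPS fix of the center pixel.
    static double[] pixelToGps(int px, int py, int imageWidth, int imageHeight,
                               double centerLat, double centerLon,
                               double altitudeM, double fovDeg) {
        double groundWidthM = 2 * altitudeM * Math.tan(Math.toRadians(fovDeg) / 2);
        double metersPerPixel = groundWidthM / imageWidth;

        // Offset from the central pixel in meters (image y grows downward).
        double eastM  = (px - imageWidth  / 2.0) * metersPerPixel;
        double northM = (imageHeight / 2.0 - py) * metersPerPixel;

        // Small-offset approximation for converting meters to degrees.
        double dLat = Math.toDegrees(northM / EARTH_RADIUS_M);
        double dLon = Math.toDegrees(eastM / (EARTH_RADIUS_M * Math.cos(Math.toRadians(centerLat))));
        return new double[] { centerLat + dLat, centerLon + dLon };
    }
}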
According to my knowledge, the height of the drone also matters, so along with the central coordinate you also need the height at which the drone took the picture.
Then perform an experiment with a reference picture containing two points whose GPS coordinates are known. Change the height of the drone and plot the number of pixels between the two coordinates against the drone's height. Do some curve fitting to get a function relating the two variables.
Using that function you can calculate the change in GPS coordinate per pixel at a particular height, and with this parameter you can easily deduce the GPS coordinates in a picture taken by the drone at that height; a sketch of the fit follows below.
I don't know whether this solution works or not, but this is my idea; you can take it and develop it further.
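A toy sketch of that calibration fit (the data points are made up): by similar triangles the ground coverage per pixel should grow roughly linearly with height, so a one-parameter least-squares fit through the origin is enough.

public class HeightCalibration {
    // heightsM: flight heights; mpp: measured ground meters-per-pixel at each height.
    static double fitSlope(double[] heightsM, double[] mpp) {
        double num = 0, den = 0;
        for (int i = 0; i < heightsM.length; i++) {
            num += heightsM[i] * mpp[i];      // least squares through the origin:
            den += heightsM[i] * heightsM[i]; // slope = sum(h*m) / sum(h*h)
        }
        return num / den;
    }

    public static void main(String[] args) {
        double[] heights = { 20, 40, 60, 80 };              // meters
        double[] mpp     = { 0.011, 0.023, 0.034, 0.045 };  // from reference photos
        double slope = fitSlope(heights, mpp);
        System.out.println("meters/pixel at 50 m: " + slope * 50);
    }
}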

Draw curved custom object in LIBGDX?

I've recently been looking into LibGDX and seem to have hit a wall. As seen in the picture, the blue dot represents the user's finger; the map generation itself is where I get stuck. Does LibGDX provide a method of dynamically drawing curved objects? I could simply generate them myself as images, but then the image is hugely stretched, to the point that the gap could fit three fingers, and it would also need to be thousands of pixels tall to accommodate the whole level design.
Is it the case that I should be drawing hundreds of polygons close together to make a curved line?
On a side note, I'll need a way of determining when the object has scrolled from bottom to top so I can generate another 'chunk' of the map.
You don't need hundreds of polygons to make a curve like you drew. You could get away with 40 quads on the left and 40 on the right, and it would look pretty smooth. Raise that to 100 per side and it will look almost perfectly smooth, and no modern device is going to have any trouble running that at 60 fps.
You could use the Mesh class to generate a procedural mesh for each side. You can make the mesh stay in one spot, locked to the camera, and modify its vertices and UVs to make it look like you are panning down an infinitely long corridor. This takes a fair amount of math up front but should be smooth sailing once you have that down.
Basically, your level design could be based on some kind of equation that takes the Y offset as an input. Or it could be a long array of offsets, with a spline or linear equation to interpolate between them. The output would be the UV and X coordinates used to update each of the vertices of your two meshes.
You can use the vertex shader to efficiently update the UV coordinates, using a constant offset uniform that you update each frame. That way you don't have to move UV data to the GPU every frame.
For the vertex positions, use your Mesh's underlying float[] and call setVertices() each frame to update it; a sketch follows below.
Actually, it might look better if you leave the UVs and the X positions alone and just scroll the Y positions up. Keep a couple of quads of padding off the top and bottom of the screen, and move the top quad to the bottom after it scrolls off screen.
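As a minimal sketch of the setVertices() approach (the sizes, attribute aliases, and the innerX level function are all assumptions):

import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;

public class WallStrip {
    static final int ROWS = 40;   // ~40 quads reads as a smooth curve
    // two vertices per row, four floats each: x, y, u, v
    private final float[] verts = new float[(ROWS + 1) * 2 * 4];
    private final Mesh mesh = new Mesh(false, (ROWS + 1) * 2, 0,
            new VertexAttribute(Usage.Position, 2, "a_position"),
            new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"));

    // Rebuild the left wall as a triangle strip whose inner edge follows the level curve.
    public void update(float scrollY, float screenW, float screenH) {
        int i = 0;
        for (int row = 0; row <= ROWS; row++) {
            float y = row * (screenH / ROWS);
            float xInner = innerX(scrollY + y);   // curve at this scroll depth
            verts[i++] = 0;      verts[i++] = y; verts[i++] = 0;                verts[i++] = y / screenH;
            verts[i++] = xInner; verts[i++] = y; verts[i++] = xInner / screenW; verts[i++] = y / screenH;
        }
        mesh.setVertices(verts);   // one upload per frame
        // render with mesh.render(shader, GL20.GL_TRIANGLE_STRIP) in your draw call
    }

    private float innerX(float depth) {   // placeholder level equation
        return 200f + 100f * (float) Math.sin(depth * 0.01f);
    }
}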
How about creating a set of curved forms that can be put together in varying order? For instance, the gap could sit in the middle at both the top and bottom of each image (with the same curvature at the end and beginning points), and in between the start and end points you can go crazy with the shape.
Then you can randomly string those images together and get an endless world.
If you don't want the gap to return to the middle each time, you could also have, say, three entry and exit points (left, middle, right); after an image that ends on the left, you of course need to add an image that starts on the left, but it might end somewhere else. A toy version of that chaining is sketched below.
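A toy sketch of that chaining rule (all names are hypothetical; it assumes every lane has at least one tile starting there):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

enum Lane { LEFT, MIDDLE, RIGHT }

class Tile {
    final Lane entry, exit;   // where the gap enters (bottom) and exits (top)
    Tile(Lane entry, Lane exit) { this.entry = entry; this.exit = exit; }
}

class TileChainer {
    // Pick a random tile whose entry lane matches the previous tile's exit lane.
    static Tile next(List<Tile> all, Lane mustEnterAt, Random rng) {
        List<Tile> fits = new ArrayList<>();
        for (Tile t : all) {
            if (t.entry == mustEnterAt) fits.add(t);
        }
        return fits.get(rng.nextInt(fits.size()));
    }
}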

Cropping out whitespace from a user drawn image

A portion of my app involves the user drawing images that will later be strung together in a PDF. The user is free to use the entire screen to draw. Once the user is done drawing, I'd like to trim off all of the white space before adding the image to the PDF. This is where I am having problems. I've thought of two different methods for locating the trimmable white space, and both seem clumsy.
My first thought was to have the stylus's motion event record whether the event has gone outside of the box so far, and if it has, expand the box to accommodate it. Unfortunately, I suspect polling on every motion event would be bad for performance. I can't just look at down and up events, because the user could draw something like the letter V.
Then I thought I could look at all the pixels (using getPixel()) and find the highest, lowest, rightmost, and leftmost black pixels. Again, this seems like a really inefficient way to find the box. I'm sure I could skip some pixels to improve performance, but I can't skip too many.
Is there a standard way of doing what I want to do? I haven't been able to find anything.
Inside your editor, at the point where you record that a pixel has been drawn on, you can update the minimum and maximum X and Y, and then use them later to crop the image.
If the user is drawing, aren't you already handling the onTouchEvent callback in order to capture the drawing events? If so, it shouldn't be a big deal to keep a minX, maxX, minY and maxY and check each recorded drawing event against these values.
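A minimal sketch of that (assuming an Android View that draws into a Bitmap named bitmap):

import android.graphics.Bitmap;
import android.view.MotionEvent;

public class DrawnBounds {
    private float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE;
    private float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE;

    // Call from the View's onTouchEvent for every DOWN and MOVE event.
    public void record(MotionEvent e) {
        minX = Math.min(minX, e.getX());
        minY = Math.min(minY, e.getY());
        maxX = Math.max(maxX, e.getX());
        maxY = Math.max(maxY, e.getY());
    }

    // Crop the finished drawing to the tracked box, clamped to the bitmap.
    // Assumes at least one event was recorded.
    public Bitmap crop(Bitmap bitmap) {
        int left   = Math.max(0, (int) minX);
        int top    = Math.max(0, (int) minY);
        int right  = Math.min(bitmap.getWidth(),  (int) Math.ceil(maxX));
        int bottom = Math.min(bitmap.getHeight(), (int) Math.ceil(maxY));
        return Bitmap.createBitmap(bitmap, left, top, right - left, bottom - top);
    }
}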

Updating image co-ordinates in Android after panning

I am trying to develop something from my research work using Android, but I am not a coding person, so I am having a hard time figuring out how to do things. I have figured out ways to achieve my functionality, but I am stuck on an issue I could not resolve on my own. It would be great if you could help me with it.
I am trying to display an image that is bigger than the screen size and make it play a sound or vibrate when I touch a particular colored pixel within the image. I was able to do this for the first instance of the image (i.e., the image displayed once I start my application), but as soon as I pan it stops working. For example, my image has a green pixel in the middle of the screen, and after I pan it moves to the left. I make the device vibrate when I touch the green pixel. It vibrates when the green is in the center, but after I pan this is not updated: it still vibrates when I touch the center of the screen, even though there is a different color there. I am guessing that the program fixes the screen co-ordinates and does not use the image co-ordinates. I tried using event.getX and getRawX, but both reference screen co-ordinates only.
My questions are:
* Is there a way to target the image co-ordinates rather than the screen co-ordinates?
* If not, how else can I accomplish this task?
Well, it's kind of semantic, but there is no built-in concept of "image co-ordinates".
If you think about it, the touch is handled in the first instance by a piece of hardware which has absolutely no knowledge of what you are touching except its physical pixels, and this is what it reports to Android.
In turn, Android has no knowledge of what that chunk of image pixels is. The position of a particular pixel of your image relative to the screen only has meaning inside your app. Since the touch event originates outside your app, there is no way to associate the two...
...unless you make the association. You moved the image in your code in response to a touch event. Remember how far you moved it, using a variable defined in the class handling the touch event, and then use that as an offset to the x and y given to you in subsequent touch events.
E.g. you pan the image 200 pixels left. A dead-centre touch now corresponds to the image pixel (width/2) + 200, since the physical pixel touched is now 200 pixels to the right of the image centre.
[EDIT] Having thought a little more about this, you might be using a matrix to pan the image, and if you aren't, do check out the Matrix class. If you are, you can query the matrix at any time to get the total pan in x and y. If you are also doing scaling and/or rotation, things get a bit more complex, but that would merit a new question.
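A minimal sketch of the Matrix route (imageMatrix and event are hypothetical names for your pan matrix and the incoming MotionEvent):

android.graphics.Matrix inverse = new android.graphics.Matrix();
if (imageMatrix.invert(inverse)) {                   // imageMatrix is what you pan with
    float[] point = { event.getX(), event.getY() };  // view (screen) coordinates
    inverse.mapPoints(point);                        // now bitmap/image coordinates
    int x = (int) point[0];
    int y = (int) point[1];
    // bounds-check x,y against the bitmap before calling getPixel(x, y)
}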
[EDIT]
To get the colour at your x,y:
int pixel = bitmap.getPixel(x, y);   // packed ARGB colour of that image pixel
int redValue = Color.red(pixel);     // each component is 0-255
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
