I'm trying to write an in-app joystick axis calibration tool.
The joystick axis area should be a rectangle, but in reality it's a non-linear closed curve, and I want to increase the accuracy.
The calibration should work this way:
we have a measured value, and we compute the correct value like this:
Correct value = [(measured value) / range] * wantedrange
where range is the difference between the maximum and minimum value measured for that axis.
But there is also an offset to move the center point to the right position; how do I calculate it?
EDIT: I also made an image: green rectangle is the expected area, red shape is the "real" inaccurate measured area, finally blue is the wanted calibrated area that I shift to (0,0) so that I can use the ratio to convert coordinates to the bigger green rectangle.
EDIT2:
This image explains how calibration can be even more accurate, thanks to zapl's answer:
If we find the blue rectangle center, we can divide the rectangle in 4 rectangles and calculate a ratio between that range and the green rectangle's range.
And the code should be something like this:
if (value < axiscenter) correctedvalue = ((value - axismin) / (axiscenter - axismin)) * wantedaxisrange;
else correctedvalue = wantedaxisrange + ((value - axiscenter) / (axismax - axiscenter)) * wantedaxisrange;
You can get the position of the blue rectangle by instructing the user to move the joystick along the edges, so that the values you see trace the red curve. You should also instruct the user to leave the joystick in the centered position at some point, since you usually need to know the center: the calculated center is not always the real center position.
For each axis, separate those values by the side of the center they are on and find those that are closest to the center point. That works with the calculated center too. Now you have the blue rectangle.
E.g. on the X axis you see values ranging from 0-20 and 80-100, and the center is ~50, so the blue rectangle spans 20-80.
Assuming you want to calibrate it so that values are 0-100 (green) you calculate correction for the x axis as
calibratedX = (uncalibrated - 20) * 100 / 60
Values are shifted by 20 to the right (subtract 20 to normalize them to 0-60), and their range is 60 (80 - 20), which you upscale to 0-100. After that, clip values to 0-100, since every point on the red line that was outside the blue rectangle will map outside that range.
Result looks like
where pink shows the values after transformation, and the pink area outside the green rectangle is cut away.
Regarding the center point: just run it through those calculations as well.
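The per-axis correction above can be sketched as a small helper. This is a minimal sketch, assuming the example numbers from this answer (blue rectangle edges at 20 and 80, target range 0-100); the method and parameter names are made up for illustration:

```java
public class AxisCalibration {
    /**
     * Maps a raw axis value from the measured (blue) range onto the
     * wanted (green) range, clipping values that fall outside, since
     * points on the red curve outside the blue rectangle map outside it.
     */
    static double calibrate(double raw, double blueMin, double blueMax,
                            double wantedRange) {
        double scaled = (raw - blueMin) * wantedRange / (blueMax - blueMin);
        // Clip to the green rectangle.
        return Math.max(0, Math.min(wantedRange, scaled));
    }
}
```

With the example values, `calibrate(50, 20, 80, 100)` shifts by 20 and upscales 0-60 to 0-100, exactly the `(uncalibrated - 20) * 100 / 60` formula above.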
So I am building an application to solve mazes. One of the options is to upload a picture, and the program will solve it. However, upon solving the maze, the output looks like this.
I would like to figure out how to make my program find the proper corridor size and have the solution look like this, with the pathway completely filled.
My data is put into an array with 1's representing the walls and 0's the spaces, like this. So far I have thought about finding the smallest distance between 1's, but that runs into problems with circular mazes and writing on the maze. I have also thought about filling the distance between the walls, but that runs into problems at intersections.
I am drawing on the image using
image.setRGB(x, y, Color.RED.getRGB());
with the image being a BufferedImage.
I am truly out of ideas and don't know how to approach this problem; any help would be appreciated.
Each square in your grid has a certain size. Say wsq * hsq for "width of square times height of square".
Given your much more fine-grained (x, y), you can find which square it is in by dividing x by wsq and y by hsq:
int xsq = x / wsq;
int ysq = y / hsq;
The area to paint red would start at (xsq * wsq, ysq * hsq) and have width/height (wsq, hsq). You could paint that red, but it would mean painting over the walls. So you have to adjust the area you're going to fill by the size of the walls: if the walls are all two pixels thick, add 1 to the x and y coordinates of the square, and subtract 2 from the width and the height.
And you could fill it (with a Graphics2D) each time you currently call image.setRGB, or you could remember which squares you have already filled.
Note: since you are working with regular-sized squares, you can also optimize your maze-solving algorithm to work in a grid of squares of size (wsq, hsq) rather than the individual pixels in the image.
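The steps above can be sketched like this. The square size (`wsq`, `hsq`) and the wall thickness are assumptions about your maze image; adjust them to match:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CellPainter {
    /**
     * Fills the interior of the grid square containing pixel (x, y) with
     * red, inset by the wall thickness so the fill never paints over walls.
     */
    static void fillCell(BufferedImage image, int x, int y,
                         int wsq, int hsq, int wallPx) {
        int xsq = x / wsq;          // column of the square
        int ysq = y / hsq;          // row of the square
        Graphics2D g = image.createGraphics();
        g.setColor(Color.RED);
        // Inset by half the wall thickness on each side (2px wall -> 1px inset).
        int inset = wallPx / 2;
        g.fillRect(xsq * wsq + inset, ysq * hsq + inset,
                   wsq - wallPx, hsq - wallPx);
        g.dispose();
    }
}
```

Calling this once per visited square (instead of `setRGB` per pixel) also makes it trivial to remember which squares were already filled.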
I'm making a coastline fractal on a window that is one by one unit wide, and I would like to make the very first one pictured below. However, I cannot figure out which x and y coordinates to use to make the angles form 90 degrees and still fit on the screen. I don't need any code; I just would like to know how to figure out which x and y coordinates to use. Thanks!
Points:
1st point: (0,0.5)
2nd point: (0.25,0.75)
3rd point: (0.75,0)
4th point: (1,0.5)
My work (although messy and illegible at times):
It looks from the picture like the first and last points both have a y-value of 0.5. Since the viewing window is one unit wide, you divide it into 4 parts, each of which is 0.25 in length. The triangles that are formed if you draw a horizontal line at y=0.5 are isosceles according to the image. Thus, you solve: sin(45) = x/0.5.
re "x and y coordinates are doubles in between 0 and 1",
Then you will need to translate from your model (the set of points that make up your fractal) and the view (the GUI display). The model will go from 0 to 1, the view from 0 to the graphical window's width. A simple linear transformation where you multiply the model by some scale factor will serve.
Seems like you're wanting to map an abstract coordinate system to your screen.
Let's say your endpoints (in arbitrary coordinates) are (0, 0) and (1, 0). Then your points for the leftmost figure, in this system, will be (0, 0), (1/4, sqrt(2)/4), (1/2, 0), (3/4, -sqrt(2)/4), and (1, 0).
The other diagrams are calculated by some method; it sounded like your question focuses on fitting the figure to the screen, so I'll continue with that. The method for fitting it to the screen is the same.
From there, you have a screen coordinate system. Each point is transformed to the screen. Let's say you have a 1000 by 1000 screen, with screen coordinates (0, 0) in the upper left. If you want to take up the entire screen, then you'd do the following:
Flip the y coordinates (+y is down on your screen)
Determine the full range in x and y for your arbitrary coordinates (1 for x, sqrt(2)/2 for y)
Multiply x values by 1000, and y values by 2000 / sqrt(2) to expand to the screen.
Offset the y values by 500 to center the image in the y direction.
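The steps above can be sketched as a small mapping function. This is a sketch assuming the arbitrary coordinates given earlier (x in 0..1, y in -sqrt(2)/4..sqrt(2)/4) and a 1000x1000 screen; the names are made up:

```java
public class ScreenMap {
    static final double X_SCALE = 1000.0;                // x range is 1
    static final double Y_SCALE = 2000.0 / Math.sqrt(2); // y range is sqrt(2)/2

    /** Maps a model point to screen pixels, (0,0) at the top-left. */
    static double[] toScreen(double x, double y) {
        double sx = x * X_SCALE;
        // Negate y to flip (+y is down on screen), then add 500 so the
        // model's y = 0 lands on the vertical center of the screen.
        double sy = -y * Y_SCALE + 500.0;
        return new double[] { sx, sy };
    }
}
```

With this, the peak (1/4, sqrt(2)/4) lands at the top edge of the screen and the trough (3/4, -sqrt(2)/4) at the bottom edge.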
I'm creating some shapes, and everything seems blurred, as if anti-aliased, despite no effects being applied.
For example, a white line drawn on a black background with 1 pixel width is rendered grey! Changing the width to 2px results in white, but not well-defined.
Searching turned up the setSmooth(false) method on shapes, but calling it makes no difference.
What should I change or disable on Stage or Scene?
See the Shape documentation:
Most nodes tend to have only integer translations applied to them and
quite often they are defined using integer coordinates as well. For
this common case, fills of shapes with straight line edges tend to be
crisp since they line up with the cracks between pixels that fall on
integer device coordinates and thus tend to naturally cover entire
pixels.
On the other hand, stroking those same shapes can often lead to fuzzy
outlines because the default stroking attributes specify both that the
default stroke width is 1.0 coordinates which often maps to exactly 1
device pixel and also that the stroke should straddle the border of
the shape, falling half on either side of the border. Since the
borders in many common shapes tend to fall directly on integer
coordinates and those integer coordinates often map precisely to
integer device locations, the borders tend to result in 50% coverage
over the pixel rows and columns on either side of the border of the
shape rather than 100% coverage on one or the other. Thus, fills may
typically be crisp, but strokes are often fuzzy.
Two common solutions to avoid these fuzzy outlines are to use wider
strokes that cover more pixels completely - typically a stroke width
of 2.0 will achieve this if there are no scale transforms in effect -
or to specify either the StrokeType.INSIDE or StrokeType.OUTSIDE
stroke styles - which will bias the default single unit stroke onto
one of the full pixel rows or columns just inside or outside the
border of the shape.
And see also the documentation of Node:
At the device pixel level, integer coordinates map onto the corners
and cracks between the pixels and the centers of the pixels appear at
the midpoints between integer pixel locations. Because all coordinate
values are specified with floating point numbers, coordinates can
precisely point to these corners (when the floating point values have
exact integer values) or to any location on the pixel. For example, a
coordinate of (0.5, 0.5) would point to the center of the upper left
pixel on the Stage. Similarly, a rectangle at (0, 0) with dimensions
of 10 by 10 would span from the upper left corner of the upper left
pixel on the Stage to the lower right corner of the 10th pixel on the
10th scanline. The pixel center of the last pixel inside that
rectangle would be at the coordinates (9.5, 9.5).
So your options for clean lines when you have an odd stroke width are:
Use a StrokeType.INSIDE or StrokeType.OUTSIDE stroke style.
Offset the coordinates of shapes by 0.5 of a pixel so that the strokes line up on pixel centers rather than the cracks between pixels.
Just use the next even number up as the stroke width, e.g. 1 => 2, 3 => 4, etc.
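For the second option, the 0.5 offset can be captured in a tiny helper. This is a sketch; the snapping math is plain Java, and the JavaFX usage is shown as a comment:

```java
public class PixelSnap {
    /**
     * Snaps a coordinate to the nearest pixel *center* (integer + 0.5) so
     * that a 1px stroke covers exactly one pixel row or column instead of
     * straddling the crack between two of them.
     *
     * Usage with a JavaFX shape (sketch):
     *   line.setStartX(snap(line.getStartX()));
     *   line.setStartY(snap(line.getStartY()));
     */
    static double snap(double coordinate) {
        return Math.floor(coordinate) + 0.5;
    }
}
```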
As to why setSmooth(false) does not work, I don't know exactly; my guess is that the antialiasing it refers to is independent of the antialiasing performed when strokes are centered on the cracks between pixels, but I am not sure why that would be.
So I've got an assignment that takes two inputs, males and females, and outputs matingPairs, the product of the two.
In addition to that, the instructions ask to draw a shape using one of those variables.
I've decided to draw circles for each value.
I first draw matingPairs, followed by the smaller male and female circles on top of the original, larger matingPairs circle.
The problem I'm running into is representing the graphic in the applet: if the numbers go higher than, say, 100, the graphic becomes too large for the applet.
I'm looking for a way to basically have the matingPairs circle always fill the applet, then have males and females dynamically adjust so their size is scaled relative to the matingPairs circle size. I'm using JApplet.
Thank you very much for any guidance. I'm really looking not for a complete solution, but for a push in the right direction.
Maybe you should provide more detail about how you are drawing the circles in the Graphics object.
The idea is to manage two bi-dimensional spaces with different scales: the first one is the input data, and the second one represents the available area to draw such data. The first one can have data at any location, such as (5, 5), (0.2, 0.3) or (1200, 3400). The key is to map the original coordinates of the first space into the second, using the proper transformation: scale + translation.
This transformation must be calculated before you start drawing, and it applies to every point drawn.
The idea is to map the rectangle where the input data resides to the available area in the graphics. If the graphics area is 200x200 pixels and the data can range from (0, 0) to (400, 400), just divide the coordinates of the points by 2 before drawing. If the original data is not centered at (0, 0), add a translation.
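The scale + translation described above can be computed once from the data's bounding box and then applied to every point. This is an illustrative sketch, not a fixed API; all names are made up:

```java
public class DataToScreen {
    final double scale, dx, dy;

    /** Builds the transform mapping the data bounding box onto width x height pixels. */
    DataToScreen(double minX, double maxX, double minY, double maxY,
                 double width, double height) {
        // Uniform scale so the data fits in both dimensions.
        scale = Math.min(width / (maxX - minX), height / (maxY - minY));
        // Translation moves (minX, minY) to pixel (0, 0).
        dx = -minX * scale;
        dy = -minY * scale;
    }

    /** Applies the transform to one data point. */
    double[] apply(double x, double y) {
        return new double[] { x * scale + dx, y * scale + dy };
    }
}
```

For the 200x200 graphics area and (0, 0)..(400, 400) data from the example, this reduces to dividing the coordinates by 2.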
So, do you need to know how to get the size of the applets canvas or how to scale the male/female circles accordingly?
Edit:
Drawing a circle to fill the 600x600 area should be easy. Just keep in mind that you often specify the top left corner of the circle and the width and height (i.e. the diameter) when calling drawOval() / fillOval() or similar methods.
The next question is: what does the size of the input (males/females) and output (pairs) represent, the area or the radius of the circles? Whatever it is, it should be easy to calculate the input/output ratio and then multiply the fixed size of the output circle by it to get the size of the input circle.
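Both interpretations can be sketched side by side. This assumes the matingPairs circle always gets a fixed diameter (the applet size); the names and the 600px figure are illustrative:

```java
public class CircleScale {
    /** Value proportional to the radius: diameter scales linearly. */
    static double byRadius(double value, double pairs, double outputDiameter) {
        return outputDiameter * value / pairs;
    }

    /** Value proportional to the area: diameter scales with the square root. */
    static double byArea(double value, double pairs, double outputDiameter) {
        return outputDiameter * Math.sqrt(value / pairs);
    }
}
```

The area variant is usually the perceptually honest one: a circle with half the area looks "half as big", while a circle with half the radius looks only a quarter as big.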
I am currently trying to show a series of images that slightly differ from each other in a 3D view, and which contain lots of transparent areas (for example, points that move in time inside a rectangle, and I would provide a 3D view with all their positions over time).
What I'm doing now is generate an image with the points drawn in it, create one Boxes of 40x40x1 per frame (or rectangular shape of 40x40), apply the image as a texture to the FRONT side of the box, and add the boxes to my scenes at positions (0, 0, z) where z is the frame number.
It works quite well, but of course there are discontinuities (of 1 "meter") between the images.
I would like to know if there is a way to create an "extrusion" object based on that image, so as to fill the space between the planes. This would be equivalent to creating one 1x1x1 box for each point, placed at (x, y, z), where x/y are the point's coordinates and z is the frame number. The actual problem is that I have lots of points (several hundred, if not thousands in some cases), and what was relatively easy to handle and render with an image would, I think, become quite heavy to render if I have to create thousands of boxes.
Thanks in advance for your help,
Frederic.
You could use a 3D texture with your data (40 x 40 x N pixels, N = number of frames).
But you still have to draw something with this texture enabled.
I would do what you are doing currently (draw quads), but not only along the Z axis: along X and Y too.
Each of the N quads along the Z axis would be 40x40 in size, each of the 40 quads along the X axis would be 40xN, and each of the 40 quads along the Y axis would be Nx40.
So for a 2x2x2 texture we would draw 2+2+2 = 6 quads, and it would look like a regular cube; for 3x3x3 points in the texture we would draw 3+3+3 = 9 quads, and it would look like 8 cubes stacked into one big cube (so instead of 8 cubes with 6 quads each, we just draw 9 quads, but the effect is the same).
For 40x40x1000 it would be 1080 quads (reasonable to draw in real time imho) instead of 40*40*1000*6 quads.
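The quad counts above are easy to compare directly. A trivial sketch of the arithmetic, just to make the savings concrete:

```java
public class QuadCount {
    /** One quad per slice along each axis of a w x h x n volume. */
    static long slices(long w, long h, long n) {
        return w + h + n;
    }

    /** Six faces per 1x1x1 box, one box per voxel. */
    static long perVoxel(long w, long h, long n) {
        return w * h * n * 6;
    }
}
```

For the 40x40x1000 case that is 1080 quads versus 9,600,000, roughly a 9000x reduction in geometry.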
I just don't know whether the graphical effect would be exactly what you wanted to achieve.