I have an application in Java SE and I want my application to always start at the center of the screen. If two monitors are plugged in, the right one should be used. So I wrote code like this:
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
if (ge.getScreenDevices().length == 2) {
    int w_1 = ge.getScreenDevices()[0].getDisplayMode().getWidth();
    int h_1 = ge.getScreenDevices()[0].getDisplayMode().getHeight();
    int w_2 = ge.getScreenDevices()[1].getDisplayMode().getWidth();
    int h_2 = ge.getScreenDevices()[1].getDisplayMode().getHeight();
    int x = w_1 + w_2 / 2 - getWidth() / 2;
    int y = h_2 / 2 - getHeight() / 2;
    setLocation(x, y);
}
Unfortunately, if a monitor is rotated 90°, its width and height are swapped. Is there any way to detect such a rotation?
You don't need to know whether the second monitor is in portrait mode. Just find the bounds of the screen in device coordinates and use the center. (If it is in portrait mode, then height>width, but that isn't an important piece of information.)
Your formula for the center point of the second device is wrong. You are assuming that the coordinates of the second screen run from (w_1, 0) to (w_1 + w_2, h_2), but that isn't necessarily true. You need to find the GraphicsConfiguration object of the second screen and call GraphicsConfiguration.getBounds() on it. You can then calculate the center point of that rectangle.
If you want to know which device is on the left or the right (or top or bottom), you can compare the x (or y) values of their bounding rectangles. Note that the x or y values may be negative.
You should check whether the height is bigger than the width (portrait). I haven't heard of anyone using portrait monitors yet, though.
Here is code that works fine in most cases (based on Enwired's answer).
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
if (ge.getScreenDevices().length == 2) {
    Rectangle bounds = ge.getScreenDevices()[1].getDefaultConfiguration().getBounds();
    int x = (int) bounds.getCenterX() - frame.getWidth() / 2;
    int y = (int) bounds.getCenterY() - frame.getHeight() / 2;
    frame.setLocation(x, y);
}
The only problem is that the device index is not always 0 = left, 1 = right.
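A hedged sketch that avoids relying on the index, picking the rightmost screen by its bounds instead (frame is assumed to be the window being positioned):
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice rightmost = ge.getScreenDevices()[0];
for (GraphicsDevice gd : ge.getScreenDevices()) {
    // the device whose bounds start furthest right is the right-hand monitor
    if (gd.getDefaultConfiguration().getBounds().x
            > rightmost.getDefaultConfiguration().getBounds().x) {
        rightmost = gd;
    }
}
Rectangle bounds = rightmost.getDefaultConfiguration().getBounds();
frame.setLocation((int) bounds.getCenterX() - frame.getWidth() / 2,
                  (int) bounds.getCenterY() - frame.getHeight() / 2);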
My gravity simulation acts more like a gravity slingshot. Once the two bodies pass over each other, they accelerate far more than they decelerate on the other side. It's not balanced. It won't oscillate around an attractor.
How do other gravity simulators get around this? For example, at http://www.testtubegames.com/gravity.html, if you create 2 bodies they just oscillate back and forth, never drifting any farther apart than their original distance, even though they move through each other as in my example.
That's how it should be. But in my case, as soon as they get close they just shoot away from each other to the edges of the imaginary galaxy never to come back for a gazillion years.
Edit: here is a video of the bug: https://imgur.com/PhhRhP7
Here is a minimal test case to run in Processing.
// Globals:
float a;       // acceleration (the declaration was missing in the original)
float v;       // velocity
int unit = 1;  // 1 pixel = 1 meter
float x;
float y;
float alx;
float aly;
float g = 6.67408 * pow(10, -11) * sq(unit); // gravitational constant
float m1 = 1 * pow(10, 15); // attractor mass
float m2 = 1;               // object mass

void setup() {
  size(200, 200);
  a = 0;
  v = 0;
  x = width/2;    // object x
  y = 0;          // object y
  alx = width/2;  // attractor x
  aly = height/2; // attractor y
}

void draw() {
  background(0);
  getAcc();
  applyAcc();
  fill(0, 255, 0);
  ellipse(x, y, 10, 10);     // object
  fill(255, 0, 0);
  ellipse(alx, aly, 10, 10); // attractor
}

void applyAcc() {
  a = getAcc();
  v += a * (1/frameRate); // add acceleration to velocity
  y += v * (1/frameRate); // add velocity to y
  a = 0;
}

float getAcc() {
  float a = 0;
  float d = dist(x, y, alx, aly);        // distance to attractor
  float gravity = (g * m1 * m2) / sq(d); // gravitational force
  a += gravity / m2;
  if (y > aly) {
    a *= -1;
  }
  return a;
}
Your distance doesn't account for the width of the objects, so they effectively occupy the same space at the same time.
The way to "cap gravity" as suggested above is to add a normal force when the outer edges touch, if it's a physical simulation.
You should get into the habit of debugging your code. Which line of code is behaving differently from what you expected?
For example, if I were you I would start by printing out the value of gravity every time you calculate it:
float gravity = (g * m1 * m2)/sq(d); //gforce
println(gravity);
You'll notice that your gravity value skyrockets as your circles get closer to each other. And this makes sense, because you're dividing by sq(d). As d gets smaller, your gravity increases.
You could simply cap your gravity value so it doesn't go off the charts anymore:
float gravity = (g * m1 * m2) / sq(d);
if (gravity > 100) {
    gravity = 100;
}
Alternatively you could cap d so it never goes below a certain value, but the result is the same.
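For example (the 20-pixel floor here is an arbitrary illustrative value, not something tuned):
float d = max(dist(x, y, alx, aly), 20); // never let d fall below 20 pixels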
In the end you'll find that this is not going to be as easy as you expected. You're going to have to tune the parameters quite a bit so your simulation works how you want.
Working demo here: https://beta.observablehq.com/#shaunlebron/1d-gravity
I followed the solution posted by the author of the sim that inspired this question here:
-First off, shrinking the timestep is always helpful. My simulation runs, as a baseline, about 40 ‘steps’ per frame, and 30 frames per second.
-To deal with the exact issue you talk about, I think modeling the bodies not as pure point masses - but rather spherical masses with a certain radius will be vital. That prevents the force of gravity from diverging to infinity. So, for instance, if you drop an asteroid into a star in my simulation (with collisions turned off), the force of gravity will increase as the asteroid gets closer, up until it reaches the surface of the star, at which point the force will begin to decrease. And the moment it’s at the center of the star (or nearby), the force will be zero (or nearly zero) - instead of near-infinite.
In my demo, I just completely turned off gravity when two objects are close enough together. Seems to work well enough.
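A minimal Processing sketch of both ideas, substepping plus a finite attractor radius, reusing the globals from the test case above; the radius value and step count are illustrative assumptions:
float radius = 10; // assumed "surface" radius of the attractor, in pixels
int substeps = 40; // several small integration steps per frame

float getAccSoftened(float d) {
  if (d >= radius) {
    return (g * m1) / sq(d); // ordinary inverse-square gravity outside the body
  }
  // inside a uniform sphere, gravity falls off linearly to zero at the center
  return ((g * m1) / sq(radius)) * (d / radius);
}

void applyAccSubstepped() {
  float dt = (1 / frameRate) / substeps;
  for (int i = 0; i < substeps; i++) {
    float d = dist(x, y, alx, aly);
    float acc = getAccSoftened(d);
    if (y > aly) acc = -acc; // always accelerate toward the attractor
    v += acc * dt;
    y += v * dt;
  }
}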
My app, which uses ZXing to scan QR codes, can't read a QR code unless the phone is VERY far away from the code (see picture: 6-7+ inches away and still not reading). The code is centered and well within the framingRect, but the camera seems to only pick up result points from the top two positioning squares. I have increased the size of the framing rectangle with some code I found here, which does yield a much better result.
Code (replaces getFramingRect in ZXing's camera/CameraManager.java):
public Rect getFramingRect() {
    if (framingRect == null) {
        if (camera == null) {
            return null;
        }
        Point screenResolution = configManager.getScreenResolution();
        int width = screenResolution.x * 3 / 4;
        int height = screenResolution.y * 3 / 4;
        Log.v("Framing rect is : ", "width is " + width + " and height is " + height);
        int leftOffset = (screenResolution.x - width) / 2;
        int topOffset = (screenResolution.y - height) / 2;
        framingRect = new Rect(leftOffset, topOffset, leftOffset + width, topOffset + height);
        Log.d(TAG, "Calculated framing rect: " + framingRect);
    }
    return framingRect;
}
For reasons beyond my comprehension, with this new larger framing rectangle, codes can be read as soon as they fit inside the rect width, whereas previously the code had to occupy a small region at the center of the rect (see pic).
My Question:
How can I make the code scan as soon as it is within the bounds of the framing rect, without increasing the size of the rectangle? Why is this happening?
Increase the width and height to 4/4 (just leave them at the full screen resolution), then change the framing rect's visual representation so it looks as if the scanner is only inside that area. Worked for my app.
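A minimal sketch of that change against the getFramingRect above; the smaller decorative viewfinder is assumed to be drawn separately in the UI:
public Rect getFramingRect() {
    if (framingRect == null) {
        if (camera == null) {
            return null;
        }
        Point screenResolution = configManager.getScreenResolution();
        // scan the entire preview; only the on-screen overlay stays small
        framingRect = new Rect(0, 0, screenResolution.x, screenResolution.y);
    }
    return framingRect;
}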
I have tried to write an algorithm in Java to rotate a 2D pixel array (not restricted to 90 degrees). The only problem is that the end result leaves me with dots/holes in the image.
Here is the code:
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int xp = (int) (nx + Math.cos(rotation) * (x - width / 2)
                + Math.cos(rotation + Math.PI / 2) * (y - height / 2));
        int yp = (int) (ny + Math.sin(rotation) * (x - width / 2)
                + Math.sin(rotation + Math.PI / 2) * (y - height / 2));
        int pixel = pixels[x + y * width];
        Main.pixels[xp + yp * Main.WIDTH] = pixel;
    }
}
'Main.pixels' is an array backing a canvas display; it is what is displayed on the monitor. 'pixels', and the function itself, live in a sprite class. The sprite class grabs the pixels from a '.png' image when the program initializes.
I've tried looking at the 'rotation matrix' solutions, but they are too complicated for me. I have noticed that as the image gets close to 45 degrees, it is somewhat stretched. What is going wrong? And what is the correct code that adds the pixels to a larger-scale array (e.g. Main.pixels[])?
It needs to be Java, and in the same style as the code above. I am not looking for complex examples, simply because I will not understand them (as said above). Simple and straight to the point is what I am looking for.
How I'd like the question to be answered:
Your formula is wrong because ...
Do this and the effect will be ...
Simplify this ...
I'd recommend ...
I'm sorry if I'm asking too much, but I have looked for an answer to this question that I can understand and use, only to be given either a rotation of 90 degrees or an example from another programming language.
You are pushing the pixels forward, and not every pixel is hit by the discretized rotation map. You can get rid of the gaps by calculating the source of each pixel instead.
Instead of

for each pixel p in the source
    pixel q = rotate(p, theta)
    q.setColor(p.getColor())

try

for each pixel q in the image
    pixel p = rotate(q, -theta)
    q.setColor(p.getColor())
This will still have visual artifacts. You can improve on this by interpolating instead of rounding the coordinates of the source pixel p to integer values.
Edit: Your rotation formulas looked odd, but they appear ok after using trig identities like cos(r+pi/2) = -sin(r) and sin(r+pi/2)=cos(r). They should not be the cause of any stretching.
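Following up on the interpolation suggestion, a hedged sketch of that step in Java; getPixel is a hypothetical bounds-clamping helper, and pixels are assumed to be packed 0xRRGGBB ints as in the question:
static double lerp(double a, double b, double t) {
    return a + (b - a) * t;
}

static int getPixel(int[] pixels, int width, int height, int x, int y) {
    // clamp to the image edges so out-of-range reads are safe
    x = Math.max(0, Math.min(width - 1, x));
    y = Math.max(0, Math.min(height - 1, y));
    return pixels[x + y * width];
}

static int bilinear(int[] pixels, int width, int height, double sx, double sy) {
    int x0 = (int) Math.floor(sx), y0 = (int) Math.floor(sy);
    double fx = sx - x0, fy = sy - y0;
    int result = 0;
    for (int shift = 0; shift <= 16; shift += 8) { // blue, green, red channels
        double top = lerp((getPixel(pixels, width, height, x0, y0) >> shift) & 0xFF,
                          (getPixel(pixels, width, height, x0 + 1, y0) >> shift) & 0xFF, fx);
        double bot = lerp((getPixel(pixels, width, height, x0, y0 + 1) >> shift) & 0xFF,
                          (getPixel(pixels, width, height, x0 + 1, y0 + 1) >> shift) & 0xFF, fx);
        result |= ((int) Math.round(lerp(top, bot, fy))) << shift;
    }
    return result;
}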
To avoid holes you can:
compute the source coordinate from the destination (just reverse your current computation); this is the same as Douglas Zare's answer
use bilinear or better filtering
use less than a single-pixel step (see the sketch after this list)
usually 0.75 pixel is enough to cover the holes, but you need to use floats instead of ints, which sometimes is not possible (due to performance, a missing implementation, or other reasons)
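A hedged sketch of that sub-pixel forward stepping, reusing the fields from the question (Main.HEIGHT is assumed to exist alongside Main.WIDTH):
float step = 0.75f; // less than one source pixel, so destination pixels get covered
for (float fy = 0; fy < height; fy += step) {
    for (float fx = 0; fx < width; fx += step) {
        int xp = (int) (nx + Math.cos(rotation) * (fx - width / 2.0)
                - Math.sin(rotation) * (fy - height / 2.0));
        int yp = (int) (ny + Math.sin(rotation) * (fx - width / 2.0)
                + Math.cos(rotation) * (fy - height / 2.0));
        if (xp >= 0 && xp < Main.WIDTH && yp >= 0 && yp < Main.HEIGHT) {
            Main.pixels[xp + yp * Main.WIDTH] = pixels[(int) fx + (int) fy * width];
        }
    }
}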
Distortion
If your image gets distorted, then the aspect ratio is not applied correctly, so the x-pixel size differs from the y-pixel size. You need to add a scale to one axis so it matches the device/transforms applied. Here are a few hints:
Are the source and destination images separate (not in place)? That is, Main.pixels and pixels must not be the same buffer; otherwise you would be overwriting some pixels before they are used, which could be another cause of distortion.
I've just realized you have cos,cos and sin,sin in your rotation formula, which is non-standard; maybe you got the angle delta wrongly signed somewhere.
Just to be sure, here is an example of bullet #1 (reverse mapping) with the standard rotation formula, translated to Java:
float c = (float) Math.cos(-rotation);
float s = (float) Math.sin(-rotation);
int x0 = Main.WIDTH / 2;  // dst rotation center
int y0 = Main.HEIGHT / 2;
int x1 = width / 2;       // src rotation center
int y1 = height / 2;
for (int a = 0, y = 0; y < Main.HEIGHT; y++) {
    for (int x = 0; x < Main.WIDTH; x++, a++) {
        // coordinate inside dst image, biased to the rotation center
        int xp = x - x0;
        int yp = y - y0;
        // rotate inversely
        int xx = (int) (xp * c - yp * s);
        int yy = (int) (xp * s + yp * c);
        // coordinate inside src image
        xp = xx + x1;
        yp = yy + y1;
        if ((xp >= 0) && (xp < width) && (yp >= 0) && (yp < height)) {
            Main.pixels[a] = pixels[xp + yp * width]; // copy pixel
        } else {
            Main.pixels[a] = 0; // out-of-range source pixel is black
        }
    }
}
This is a bit more of a conceptual question. Let's say you have an array of squares, each 40 pixels by 40 pixels. Let's also say you clicked inside one of them. How would you get another object to appear in the center of the box that was clicked, instead of exactly where the mouse was clicked?
Would you use an offset of some kind? I am really struggling to understand how to determine the center of the square in relation to the mouse click.
Assuming you can calculate the position of the box that was clicked (mouseX / 40 for the column and mouseY / 40 for the row), you would simply need to calculate the center position for the object...
Normally something like...
int x = (parentWidth - childWidth) / 2;
int y = (parentHeight - childHeight) / 2;
would give you the center position for the child within the parent. You would then simply apply the offset of the box in question...
int x = xOffset + ((parentWidth - childWidth) / 2);
int y = yOffset + ((parentHeight - childHeight) / 2);
I believe this is more of a logic question than a Java question, sorry.
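Putting both steps together for the 40-pixel grid from the question; cellSize, childWidth, childHeight, and child are illustrative names, not from the original post:
int cellSize = 40;
int col = mouseX / cellSize; // which column was clicked
int row = mouseY / cellSize; // which row was clicked
int x = col * cellSize + (cellSize - childWidth) / 2;
int y = row * cellSize + (cellSize - childHeight) / 2;
child.setLocation(x, y); // place the object centered in the clicked cell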
My intent is rather straightforward: I want the ship to move and rotate with a matrix, with the bitmap ship1 being the center pivot of the rotation. The code works great except that the pivot is off by a strange offset (picture of the conundrum linked at the bottom).
The default rotation value of 0 works, but all the other values seem to slide away from the center, with 180 being the furthest from it.
centerX = playerValues[Matrix.MTRANS_X] + ship1.getWidth() / 2;
centerY = playerValues[Matrix.MTRANS_Y] + ship1.getHeight() / 2;
newRotation = (float) Math.toDegrees(Math.atan2(fingery1 - centerY, fingerx1 - centerX));
matrix.postRotate(newRotation - prevRotation, centerX, centerY);
prevRotation = newRotation;
if (fingerx1 > playerX) {
    xspeed = 1;
} else if (fingerx1 < playerX) {
    xspeed = 0;
} else if (fingery1 > playerY) {
    yspeed = 1;
} else if (fingery1 < playerY) {
    yspeed = 0;
}
matrix.postTranslate(xspeed, yspeed);
matrix.getValues(playerValues);
I tried to draw how the bitmap relates to the pivot at different angles (the blue dot is the point I intend to rotate the bitmap around; the arrow pointing right is the only correct one).
http://i.stack.imgur.com/2Yw76.png
Please let me know if you see any errors; any feedback helps! I just need a second pair of eyes on this because mine are going to explode soon.
Consider studying a good computer graphics text on matrix math; Foley and Van Dam is always a safe bet.
A matrix A is applied to a point x with the multiplication Ax. You have A = RT, a rotation with a translation post-multiplied. The result is RTx, which is R(Tx), meaning the point is translated and then rotated, when you probably meant the opposite.
Additionally, it appears you are concatenating incremental changes repeatedly. Floating-point errors will accumulate, visible as worsening distortions. Instead, maintain orientation parameters x, y, theta for each ship, controlled by the UI, and set the matrix from them in each rendering. The transform should be a rotation about the point (w/2, h/2) followed by a translation to (x, y), but the matrix that effects this is the translation post-multiplied by the rotation (TR)! Also, you must reset the matrix for each ship.
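A hedged sketch of that per-frame setup using Android's Matrix; x, y, and thetaDegrees stand for the per-ship orientation parameters described above, and the draw call is assumed to happen in the view's rendering code:
matrix.reset(); // start from identity for each ship, every frame
matrix.setRotate(thetaDegrees, ship1.getWidth() / 2f, ship1.getHeight() / 2f); // rotate about the bitmap center
matrix.postTranslate(x, y); // then translate into position
canvas.drawBitmap(ship1, matrix, paint);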