I want to resize my images (original size is 1080p), but they don't resize properly and I don't know why. The images just don't end up with the right size sometimes. On my emulator and my old 800*480 smartphone it works fine, but on my Nexus 4 with 1280*768 things don't look right. There is no problem reading the screen resolution; there is just a bug in my resize procedure. Please help me.
private float smaller;
smaller = height/1080; //height is screenheight; in my case its 768 because of landscape
object.bitmap = Bitmap.createScaledBitmap(bitmap,(int)(smaller*bitmap.getWidth()) ,(int)(smaller*bitmap.getHeight()), true);
In the end, the height is not resized to 768/1080 * bitmapheight, and I don't know why.
Edit:
These are screenshots of my program showing that the images do not have the same height.
First image:
imgur.com/STSgAOd,Wh3fVdX
Second:
imgur.com/STSgAOd,Wh3fVdX#1
As you can see, the images are not equal in height. On my emulator and my old smartphone they look right. The images should not touch the bottom, but on my Nexus 4 they do.
I also tried double:
private double factor;
factor = ((double)screenheight/(double)1080);
objekte.bitmap1 = Bitmap.createScaledBitmap(bitmap,(int)(factor*bitmap.getWidth()) ,(int)(factor*bitmap.getHeight()), true);
Same bad result.
You assume the height needs to resize the most (look at your height/1080). It might be that the width has to resize the most. I use this to scale them:
//Calculate what scale is needed
double xFactor = (double)image.Width/(double)ScreenWidth;
double yFactor = (double)image.Height/(double)ScreenHeight;
double factor = xFactor;
if (yFactor > xFactor) {
    factor = yFactor;
}
int imageWidth = Convert.ToInt32(image.Width / factor);
int imageHeight = Convert.ToInt32(image.Height / factor);
Note: this is written in C#; it needs some changes to work in Java.
Note 2: this makes sure the image fills the screen as much as possible while keeping its aspect ratio (since it is scaled uniformly).
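Translated to Java, the same fit-to-screen idea looks roughly like this (a sketch with made-up sizes; variable names are illustrative, not from the original code):

```java
class FitScale {
    public static void main(String[] args) {
        // example: fit a 1920x1080 image onto a 1280x768 screen
        double imageW = 1920, imageH = 1080;
        double screenW = 1280, screenH = 768;
        double xFactor = imageW / screenW;   // 1.5
        double yFactor = imageH / screenH;   // ~1.41
        // divide by the larger factor so neither dimension overflows the screen
        double factor = Math.max(xFactor, yFactor);
        int newWidth = (int) Math.round(imageW / factor);
        int newHeight = (int) Math.round(imageH / factor);
        System.out.println(newWidth + "x" + newHeight); // 1280x720
    }
}
```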
It's caused by integer division, as you can see here:
public static void main(String[] args) {
    int height = 768;
    float smaller = (float) height / 1080; // <-- force float division
    float test = height / 1080;            // <-- integer division,
                                           // then the int result is assigned to the float
    System.out.println("test: " + test);
    System.out.println("smaller: " + smaller);
}
Output is
test: 0.0
smaller: 0.7111111
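Applied to the code in the question, the fix is simply to force floating-point division, e.g. with a float literal:

```java
class ScaleFactor {
    public static void main(String[] args) {
        int height = 768;                // screen height in landscape
        float smaller = height / 1080f;  // 1080f forces float division
        System.out.println(smaller);     // 0.7111111
        // then, as before:
        // object.bitmap = Bitmap.createScaledBitmap(bitmap,
        //         (int) (smaller * bitmap.getWidth()),
        //         (int) (smaller * bitmap.getHeight()), true);
    }
}
```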
I am trying to center the content of my page after scaling it by a factor X. I have tried using the Matrix.translate function, but I always end up with the wrong position, except when scaling with a factor of 0.5 (which makes total sense to me).
My current code:
for (int i = 0; i < doc.getNumberOfPages(); i++) {
    pdfBuilder.addPage(doc.getPage(i));
    PDPage p = pdfBuilder.getDocument().getPage(i);
    Matrix matrix = new Matrix();
    float scaleFactor = 0.7f;
    float pageHeight = p.getMediaBox().getHeight();
    float pageWidth = p.getMediaBox().getWidth();
    float translateX = pageWidth * (1 - scaleFactor);
    float translateY = pageHeight * (1 - scaleFactor);
    matrix.scale(scaleFactor, scaleFactor);
    matrix.translate(translateX, translateY);
    PDPageContentStream str = new PDPageContentStream(pdfBuilder.getDocument(), p,
            AppendMode.PREPEND, false);
    str.beginText();
    str.transform(matrix);
    str.endText();
    str.close();
}
I have also tried other boxes like the cropBox and bBox, but I think I am totally wrong in what I am doing right now. Please help me! :)
Update
I finally found a solution. The new translation values I am using now look like the following.
float translateX = (pageWidth * (1- scaleFactor)) / scaleFactor / 2;
float translateY = (pageHeight * (1- scaleFactor)) / scaleFactor / 2;
First of all, it is important to note what @mkl said:
The crop box may be the box you should use instead of the media box.
The code implicitly assumes that the lower left corner of the (media/crop) box is the origin of the coordinate system. This often is the case but not always.
The code only scales the static content, not annotations.
Now, the explanation of the translation (e.g. the translation for the page height). Please note that I am not a mathematician; I just tried different approaches, and this is the one that worked for me.
Firstly, we multiply the page height by the complement of the scale factor: pageHeight * (1 - scaleFactor). This is the amount of space freed up by scaling. We need the complement because the smaller we scale something, the more it needs to move from a given position; if we used the scale factor itself here, the smaller we scaled the content, the less it would translate toward the centre.
The translation was then still off: overall it moved the scaled content in the right direction, just not into the centre. Dividing by the scale factor accounts for the fact that the translation is applied after the scale, so it is measured in scaled units rather than page units; dividing by 2 then splits the freed-up space equally between the two sides, which is what centres the content.
If you can explain this more rigorously, feel free to edit this answer. :)
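A quick numeric check of the formula (the page width is made up; the key point is that the translation is applied in the already-scaled coordinate system):

```java
class CenterCheck {
    public static void main(String[] args) {
        float scaleFactor = 0.5f;
        float pageWidth = 600f;
        float translateX = (pageWidth * (1 - scaleFactor)) / scaleFactor / 2; // 300
        // effect of the translation in unscaled (page) units:
        float shiftInPageUnits = translateX * scaleFactor;                    // 150
        // margin needed on each side to center the scaled content:
        float margin = (pageWidth - pageWidth * scaleFactor) / 2;             // 150
        System.out.println(shiftInPageUnits == margin);                       // true
    }
}
```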
My app, which uses ZXing to scan QR codes, can't read a QR code unless the phone is VERY far away from the code (see picture: 6-7+ inches away and still not reading). The code is centered and well within the framingRect, but the camera seems to only be picking up result points from the top two positioning squares. I increased the size of the framing rectangle through some code I found here, which does yield a much better result.
Code: (replaces getFramingRect from zxing.camera.cameramanager.Java)
public Rect getFramingRect() {
    if (framingRect == null) {
        if (camera == null) {
            return null;
        }
        Point screenResolution = configManager.getScreenResolution();
        int width = screenResolution.x * 3 / 4;
        int height = screenResolution.y * 3 / 4;
        Log.v("Framing rect is : ", "width is " + width + " and height is " + height);
        int leftOffset = (screenResolution.x - width) / 2;
        int topOffset = (screenResolution.y - height) / 2;
        framingRect = new Rect(leftOffset, topOffset, leftOffset + width, topOffset + height);
        Log.d(TAG, "Calculated framing rect: " + framingRect);
    }
    return framingRect;
}
For reasons beyond my comprehension, with this new larger framing rectangle, codes can be read as soon as they fit inside the rect width, whereas previously the code had to occupy a small region at the center of the rect (see pic).
My Question:
How can I make the code scan as soon as it is within the bounds of the framing rect, without increasing the size of the rectangle? Why is this happening?
Increase the width and height to 4/4 (just leave them at the screen resolution), and then change the framing rect's visual representation to make it seem as if the scanner only covers that area. Worked for my app.
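As a sketch of that idea (plain ints standing in for the Android Rect; the resolution is made up): hand the decoder a rect covering the whole screen, and keep the 3/4-sized rect only for drawing the viewfinder overlay.

```java
class FramingRects {
    public static void main(String[] args) {
        int screenW = 1280, screenH = 768; // example resolution
        // rect handed to the decoder: the whole screen
        int[] scanRect = {0, 0, screenW, screenH}; // left, top, right, bottom
        // smaller rect used only for drawing the on-screen viewfinder
        int w = screenW * 3 / 4, h = screenH * 3 / 4;
        int left = (screenW - w) / 2, top = (screenH - h) / 2;
        int[] visualRect = {left, top, left + w, top + h};
        System.out.println(visualRect[0] + "," + visualRect[1]); // 160,96
    }
}
```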
I have tried to make an algorithm in Java to rotate a 2D pixel array (not restricted to 90 degrees); the only problem is that the end result leaves me with dots/holes within the image.
Here is the code:
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int xp = (int) (nx + Math.cos(rotation) * (x - width / 2)
                + Math.cos(rotation + Math.PI / 2) * (y - height / 2));
        int yp = (int) (ny + Math.sin(rotation) * (x - width / 2)
                + Math.sin(rotation + Math.PI / 2) * (y - height / 2));
        int pixel = pixels[x + y * width];
        Main.pixels[xp + yp * Main.WIDTH] = pixel;
    }
}
'Main.pixels' is an array connected to a canvas display; this is what is displayed on the monitor.
'pixels', and the function itself, are within a sprite class. The sprite class grabs the pixels from a '.png' image at initialization of the program.
I've tried looking at the 'rotation matrix' solutions, but they are too complicated for me. I have noticed that when the image gets close to 45 degrees, it is somewhat stretched. What is going wrong? And what is the correct code that adds the pixels to a larger-scale array (e.g. Main.pixels[])?
It needs to be Java, and close to the code format above. I am not looking for complex examples, simply because I will not understand them (as said above). Simple and straight to the point is what I am looking for.
How I'd like the question to be answered:
Your formula is wrong because ....
Do this and the effect will be...
Simplify this...
Id recommend...
I'm sorry if I'm asking too much, but whenever I have looked for an answer related to this question that I can understand and use, I have always been given either a rotation of 90 degrees or an example in another programming language.
You are pushing the pixels forward, and not every pixel is hit by the discretized rotation map. You can get rid of the gaps by calculating the source of each pixel instead.
Instead of

for each pixel p in the source
    pixel q = rotate(p, theta)
    q.setColor(p.getColor())

try

for each pixel q in the image
    pixel p = rotate(q, -theta)
    q.setColor(p.getColor())
This will still have visual artifacts. You can improve on this by interpolating instead of rounding the coordinates of the source pixel p to integer values.
Edit: Your rotation formulas looked odd, but they appear ok after using trig identities like cos(r+pi/2) = -sin(r) and sin(r+pi/2)=cos(r). They should not be the cause of any stretching.
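For the interpolation step, here is a minimal grayscale sketch (the single-int-per-pixel format and the names are illustrative; a packed ARGB image would need this per channel):

```java
class BilinearSample {
    // Sample src (w x h, one intensity per int) at a fractional source
    // coordinate (sx, sy); out-of-range samples return 0 (black).
    static int sample(int[] src, int w, int h, double sx, double sy) {
        int x0 = (int) Math.floor(sx), y0 = (int) Math.floor(sy);
        if (x0 < 0 || y0 < 0 || x0 + 1 >= w || y0 + 1 >= h) return 0;
        double fx = sx - x0, fy = sy - y0;
        // blend the 2x2 neighborhood: first along x, then along y
        double top = src[y0 * w + x0] * (1 - fx) + src[y0 * w + x0 + 1] * fx;
        double bot = src[(y0 + 1) * w + x0] * (1 - fx) + src[(y0 + 1) * w + x0 + 1] * fx;
        return (int) Math.round(top * (1 - fy) + bot * fy);
    }

    public static void main(String[] args) {
        int[] src = {0, 100, 200, 300}; // 2x2 image
        System.out.println(sample(src, 2, 2, 0.5, 0.5)); // 150
    }
}
```

In the rotation loop, you would call sample with the fractional (un-rounded) source coordinate computed by the inverse rotation.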
To avoid holes you can:
- compute the source coordinate from the destination (just reverse your current computation); this is the same as Douglas Zare's answer
- use bilinear or better filtering
- use less than a single-pixel step; usually 0.75 pixel is enough to cover the holes, but you need to use floats instead of ints, which is sometimes not possible (due to performance, a missing implementation, or other reasons)
Distortion
If your image gets distorted, then the aspect ratio is not correctly applied, so the x-pixel size differs from the y-pixel size. You need to add a scale to one axis so it matches the device/transforms applied. Here are a few hints:
Are the source image and destination image separate (not in place)? That is, are Main.pixels and pixels different buffers? Otherwise you are overwriting some pixels before they are read, which could be another cause of distortion.
I just realized you have cos,cos and sin,sin in the rotation formula, which is non-standard; maybe the angle delta got wrongly signed somewhere.
Just to be sure, here is an example of bullet #1 (reverse mapping) with the standard rotation formula, in Java:
float c = (float) Math.cos(-rotation);
float s = (float) Math.sin(-rotation);
int x0 = Main.width / 2;
int y0 = Main.height / 2;
int x1 = width / 2;
int y1 = height / 2;
for (int a = 0, y = 0; y < Main.height; y++)
    for (int x = 0; x < Main.width; x++, a++) {
        // coordinate inside dst image, biased to the rotation center
        int xp = x - x0;
        int yp = y - y0;
        // rotate inverse
        int xx = (int) (xp * c - yp * s);
        int yy = (int) (xp * s + yp * c);
        // coordinate inside src image
        xp = xx + x1;
        yp = yy + y1;
        if ((xp >= 0) && (xp < width) && (yp >= 0) && (yp < height))
            Main.pixels[a] = pixels[xp + yp * width]; // copy pixel
        else
            Main.pixels[a] = 0; // out-of-range source pixel is black
    }
I have gone through a lot of answers on this site, but unfortunately nothing solves the problem I am about to describe.
I have designed an app with 800x520 in landscape mode. All assets are designed using that predefined window size. I used the unit system, but I think that's irrelevant here.
Now, I have a sprite/image that is 85x70 in size. I want it to scale while maintaining its original aspect ratio, no matter what the window/screen size of the real device is. I have the following so far, which keeps the ratio but doesn't resize properly: if the device screen is bigger than my predefined window size in both width and height, it still shows the sprite small. If there is a change only in width or only in height, that's fine; the image should not be scaled on x or y then. But the snippet below doesn't do the job.
float screenW = Screen.SCREEN_WIDTH; // Predefined width 800
float screenH = Screen.SCREEN_HEIGHT; // Predefined height 520
float deviceW = Gdx.graphics.getWidth(); // Actual device width that can vary
float deviceH = Gdx.graphics.getHeight(); // Actual device height that can vary
float changeX = screenW / deviceW; // Also tried deviceW / screenW
float changeY = screenH / deviceH; // Also tried deviceH / screenH
// I tried applying the changeX and changeY above
// in sprite's scale.x and scale.y respectively but no luck
// So I tried the below to get the new size of the sprite
// with and without applying scale above..
// Tried a lot of ways but no luck
float newWidth = (Sprite.WIDTH * changeX);
float newHeight = (Sprite.HEIGHT * changeY);
I don't need any actual code; as long as I have a correct algorithm, I'd appreciate it.
The calculation for your changeX and changeY seems backwards. If the new device has a width of 1600, for example, you would want to scale by a factor of 2, but your changeX would be 1/2.
Try something like:
float changeX = deviceW / screenW;
float changeY = deviceH / screenH;
Once you have calculated these, you will need to scale by the lesser of the two to preserve the aspect ratio.
float scale = Math.min(changeX, changeY);
Then you can calculate the new size by multiplying the original sprite dimensions by the scale.
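Putting that together with the numbers from the question (the 1280x768 device size is just an example):

```java
class SpriteScale {
    public static void main(String[] args) {
        float screenW = 800f, screenH = 520f;   // design resolution
        float deviceW = 1280f, deviceH = 768f;  // example device resolution
        float changeX = deviceW / screenW;      // 1.6
        float changeY = deviceH / screenH;      // ~1.477
        // the lesser factor preserves the aspect ratio without cropping
        float scale = Math.min(changeX, changeY);
        float newWidth = 85 * scale;            // sprite is 85x70 in the question
        float newHeight = 70 * scale;
        System.out.println(newWidth + " x " + newHeight);
    }
}
```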
I have an application in Java SE and I want my application to always start at the center of the screen. If two monitors are plugged in, the right one should be used. So I wrote code like this:
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
if (ge.getScreenDevices().length == 2) {
    int w_1 = ge.getScreenDevices()[0].getDisplayMode().getWidth();
    int h_1 = ge.getScreenDevices()[0].getDisplayMode().getHeight();
    int w_2 = ge.getScreenDevices()[1].getDisplayMode().getWidth();
    int h_2 = ge.getScreenDevices()[1].getDisplayMode().getHeight();
    int x = w_1 + w_2 / 2 - getWidth() / 2;
    int y = h_2 / 2 - getHeight() / 2;
    setLocation(x, y);
}
Unfortunately, if the monitor is rotated 90°, width and height should be flipped. Is there any way to detect such a rotation?
You don't need to know whether the second monitor is in portrait mode. Just find the bounds of the screen in device coordinates and use the center. (If it is in portrait mode, then height>width, but that isn't an important piece of information.)
Your formula to determine the center point of the second device is wrong. You are assuming that the coordinates of the second screen runs from (w_1,0) to (w_1 + w_2, h_2), but that isn't necessarily true. You need to find the GraphicsConfiguration object of the second screen and call GraphicsConfiguration.getBounds() on it. You can then calculate the center point of that rectangle.
If you want to know which device is on the left or the right (or top or bottom), you can compare the x (or y) values of their bounding rectangles. Note that the x or y values may be negative.
You should check whether the height is bigger than the width (portrait). I haven't heard of anyone using portrait monitors yet, though.
Here is code that works fine in most cases (based on Enwired's answer):
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
if (ge.getScreenDevices().length == 2) {
    Rectangle bounds = ge.getScreenDevices()[1].getDefaultConfiguration().getBounds();
    int x = (int) bounds.getCenterX() - frame.getWidth() / 2;
    int y = (int) bounds.getCenterY() - frame.getHeight() / 2;
    frame.setLocation(x, y);
}
The only problem is that the device index is not always 0 = left, 1 = right.
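That ordering problem can be avoided by comparing the screens' bounds instead of relying on the index. A minimal sketch (the Rectangle[] stands in for gd.getDefaultConfiguration().getBounds() of each GraphicsDevice; the 400x300 window size is made up):

```java
import java.awt.Rectangle;

class RightMostScreen {
    // Pick the screen whose bounds start furthest to the right.
    // Screen coordinates can be negative, so comparing bounds.x is the
    // reliable way to find the right-most monitor.
    static Rectangle rightMost(Rectangle[] screens) {
        Rectangle best = screens[0];
        for (Rectangle r : screens) {
            if (r.x > best.x) best = r;
        }
        return best;
    }

    public static void main(String[] args) {
        // hypothetical layout: primary at (0,0), second monitor to its left
        Rectangle[] screens = { new Rectangle(0, 0, 1920, 1080),
                                new Rectangle(-1280, 0, 1280, 1024) };
        Rectangle r = rightMost(screens);
        // center a 400x300 window on that screen
        int x = r.x + (r.width - 400) / 2;
        int y = r.y + (r.height - 300) / 2;
        System.out.println(x + "," + y); // 760,390
    }
}
```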