Getting and setting RGB values on a BufferedImage - java

I'm having a hard time dealing with RGB values in Java, so I started with a small experiment.
It came down to this: load an image, get its RGB values, and create a new image with the same values. Unfortunately, this does not work (the two images are displayed differently, see picture) with the following code. Can someone see what's wrong?
BufferedImage oriImage=ImageIO.read(new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));
int[] oriImageAsIntArray = new int[oriImage.getWidth()*oriImage.getHeight()];
oriImage.getRGB(0, 0, oriImage.getWidth(),oriImage.getHeight(), oriImageAsIntArray, 0, 1);
BufferedImage bfImage= new BufferedImage(oriImage.getWidth(),oriImage.getHeight(),
BufferedImage.TYPE_INT_ARGB);
bfImage.setRGB(0,0,bfImage.getWidth(),bfImage.getHeight(),oriImageAsIntArray, 0, 1);

Apparently, getRGB and setRGB were not being used correctly.
I changed the code to:
oriImage.getRGB(0, 0, oriImage.getWidth(),oriImage.getHeight(), oriImageAsIntArray, 0, oriImage.getWidth());
(...)
bfImage.setRGB(0,0,bfImage.getWidth(),bfImage.getHeight(),oriImageAsIntArray, 0, bfImage.getWidth());
... and the picture displayed correctly. I still do not understand what this last argument is. In the JavaDoc, it is described as:
scansize - scanline stride for the rgbArray
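The scansize is the number of array elements that make up one row (scanline) of the requested region, so it normally equals the region's width. According to the JavaDoc, pixel (x, y) is stored at rgbArray[offset + (y - startY) * scansize + (x - startX)]. A minimal sketch using the names from the snippet above (the coordinates are just an example):
// With offset = 0 and scansize = width, pixel (x, y) sits at index y * width + x.
int x = 10, y = 20;  // any pixel inside the image
int argb = oriImageAsIntArray[y * oriImage.getWidth() + x];
Passing 1 as the scansize told getRGB that each row was only one element wide, which is why the first copy came out scrambled.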

Related

Finding 4 corner polygon around contours, OpenCV

I am trying to find the coordinates of a shape (or shapes) in a black-and-white image. I am using the findContours method to find contours and approxPolyDP to simplify them into polygons. The shape in the input image (see below) is processed text; for each field I need to find a 4-corner polygon that fits around the shape while wasting as little outside space as possible. approxPolyDP rarely gives me 4 corners (despite changing parameters), which I need in order to apply a perspective transform to the original image, skip the deskewing algorithm, and crop out the text. How can I find the best-fitting 4-corner polygons for each field (not rectangles)? I could not find any proper tutorial on how to do that; is it really hard? Below I present my current code in Java, the desired result, the input, and the current output. NOTE: I would highly appreciate a method that does not involve HoughLines, since that approach is slow (this is for mobile phones, which is why I am asking), but if it is the only way you know to get the result I need, please post it; it would be appreciated.
Code for finding current shape(s):
Mat mask = new Mat(src.size(), CvType.CV_8UC3, new Scalar(0, 0, 0));
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(src, contours, new Mat(), Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); i++)
{
    int contourSize = (int) contours.get(i).total();
    // approximate the contour with a polygon; epsilon is 4% of the contour's perimeter
    MatOfPoint2f curContour2f = new MatOfPoint2f(contours.get(i).toArray());
    Imgproc.approxPolyDP(curContour2f, curContour2f, 0.04 * Imgproc.arcLength(curContour2f, true), true);
    contours.set(i, new MatOfPoint(curContour2f.toArray()));
    Imgproc.drawContours(mask, contours, i, new Scalar(0, 255, 0), 3);
}
Average input:
Desired result (it's not a rectangle, the corners do not have to be 90 degrees, but there must be 4 of them):
Average current output:
Other output example: the input picture here was more detailed (with some gaps), so the output is much further from what I want. Polygons inside other polygons are not a problem, but the main shape of the whole block has too many corners:
Thank you in advance.

Image median threshold in java

I am implementing a median threshold. The idea is: I load a color image, use Arrays.sort to get the median of each RGB value, and then write the new image based on the median values.
Here is the code.
public class medianthreshold {
    public static void main(String[] a) throws Throwable {
        File f = new File("C:\\Users\\asd\\Documents\\NetBeansProjects\\JavaApplication34\\images.jpg"); // input photo file
        Color[] pixel = new Color[9];
        int[] R = new int[9];
        int[] B = new int[9];
        int[] G = new int[9];
        File output = new File("C:\\Users\\asd\\Documents\\NetBeansProjects\\JavaApplication34\\outputmedian.jpg");
        // ... (reading the image into img and the loop over pixels i, j that fills R, G and B is not shown) ...
            img.setRGB(i, j, new Color(R[4], B[4], G[4]).getRGB());
        }
        ImageIO.write(img, "jpg", output);
    }
}
And I would like to enhance it so that it becomes a black-and-white image, by adding a condition: if the pixel value of each RGB channel is less than the median, the value should be 0 (representing white), otherwise the value becomes 0xFFFFFF (representing black). 225x225 is the image width and height.
The problem I'm facing now is that I don't know where to put this check so that every pixel value is changed to 0 or 0xFFFFFF based on R[4], G[4] and B[4], which represent the median RGB values. The output image should only contain 0 and 1 pixel values, i.e. it should be a black-and-white image.
You said median, but you computed the mean. That's okay; it's just another way to do an adaptive global thresholding.
To make it faster, you should simply use the histogram of the image: it is faster and simpler for computing the mean or the median value. You can also use the DataBuffer instead of getRGB. Moreover, I strongly advise using TYPE_3BYTE_BGR images instead of TYPE_INT_RGB.
For the mean, you have source code examples in ImageJ.
0/1, 0/0xFF or 0/0xFFFF just depends on the image encoding, so respectively TYPE_BYTE_BINARY, TYPE_BYTE_GRAY or TYPE_USHORT_GRAY when you work with a BufferedImage.
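A minimal sketch of the histogram idea for the median (the class and method names are mine, not from the answer; it assumes a single byte per sample, e.g. TYPE_BYTE_GRAY, and for TYPE_3BYTE_BGR you would build one histogram per channel):
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

class HistogramMedian {
    // Returns the median sample value, computed from a 256-bin histogram
    // built directly on the image's DataBuffer instead of per-pixel getRGB calls.
    static int medianFromHistogram(BufferedImage img) {
        byte[] data = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
        int[] hist = new int[256];
        for (byte b : data) {
            hist[b & 0xFF]++;                      // count unsigned byte values
        }
        int half = data.length / 2, cumulative = 0;
        for (int v = 0; v < 256; v++) {
            cumulative += hist[v];
            if (cumulative >= half) {
                return v;                          // first value whose cumulative count reaches the midpoint
            }
        }
        return 255;
    }
}
The returned value can then serve as the global threshold when writing the black-and-white output.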

How to get a ByteBuffer that will work with LWJGL?

I'm trying to call the glTexImage2D function using the OpenGL library. I'm using the LWJGL as the framework to use OpenGL in Java.
According to the documentation, this method accepts the following parameters:
public static void glTexImage2D(int target,
int level,
int internalformat,
int width,
int height,
int border,
int format,
int type,
java.nio.ByteBuffer pixels)
My implementation of this is below.
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, 1092, 1092, 0, GL11.GL_RGB, GL11.GL_INT, imageData);
However, I am getting an error:
Exception in thread "main" java.lang.IllegalArgumentException: Number of remaining buffer elements is 3577392, must be at least 14309568. Because at most 14309568 elements can be returned, a buffer with at least 14309568 elements is required, regardless of actual returned element count
at org.lwjgl.BufferChecks.throwBufferSizeException(BufferChecks.java:162)
at org.lwjgl.BufferChecks.checkBufferSize(BufferChecks.java:189)
at org.lwjgl.BufferChecks.checkBuffer(BufferChecks.java:230)
at org.lwjgl.opengl.GL11.glTexImage2D(GL11.java:2855)
at TextureLab.testTexture(TextureLab.java:100)
at TextureLab.start(TextureLab.java:39)
at TextureLab.main(TextureLab.java:20)
I've done a lot of searching, and I assume the way I create the ByteBuffer for the last parameter is what is causing the issue.
My code for getting a ByteBuffer from an image is as follows:
img = ImageIO.read(file);
byte[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
ByteBuffer buffer = BufferUtils.createByteBuffer(pixels.length);
buffer.put(pixels);
buffer.flip();
buffer.rewind();
I've substituted width*height*4 for the buffer length and even hardcoded the number from the error, all with no luck. Any ideas what I'm doing wrong? I think the issue is in my ByteBuffer, but even of that I'm not sure.
The LWJGL layer is telling you that your buffer should be at least 14309568 bytes, but you provide only 3577392. The reason is that you used GL_INT as the type parameter of the glTexImage2D call, so the GL assumes each pixel is represented by 3 four-byte integer values.
You just want to use GL_UNSIGNED_BYTE for typical 8-bit-per-channel image content, which maps exactly to the 3577392 bytes you are currently providing.
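A hedged sketch of the corrected call, changing only the type argument from the question (note that if the bytes come from a TYPE_3BYTE_BGR BufferedImage, the format argument may additionally need to be GL12.GL_BGR instead of GL11.GL_RGB):
// 1092 * 1092 pixels * 3 bytes per pixel = 3577392 bytes, matching the buffer above.
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, 1092, 1092, 0,
        GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, imageData);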

Translate image instead of subimage copy

Imagine you have a large picture, say about 10000px x 3000px, and you want to translate it along the x axis in a very efficient way, so no AffineTransform or anything like that. If possible, the part moved out on the left side should be appended on the right side, so a wrap-around would be very nice.
What you have on hand are: Java 7, OpenCV.
Do you have any suggestions?
Here you can see how it can be done with OpenCV in C++. You just need to translate it to Java:
// C++:
Mat outImg(inputImg.size(),inputImg.type());
inputImg(Rect(0, 0, shiftX, height)).copyTo(outImg(Rect(width-shiftX, 0, shiftX, height)));
Becomes something like:
Mat outImg = new Mat(inputImg.size(),inputImg.type());
inputImg.submat(new Rect(0, 0, shiftX, height)).copyTo(outImg.submat(new Rect(width-shiftX, 0, shiftX, height)));
Although this one-liner is not very readable ;)
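For completeness, a hedged Java sketch of the full wrap-around shift with both copies (the method name and the assumption 0 < shiftX < width are mine):
import org.opencv.core.Mat;
import org.opencv.core.Rect;

static Mat shiftLeftWrap(Mat inputImg, int shiftX) {
    // Shift the image left by shiftX pixels; the strip that falls off the
    // left edge reappears on the right (assumes 0 < shiftX < width).
    int width = inputImg.cols();
    int height = inputImg.rows();
    Mat outImg = new Mat(inputImg.size(), inputImg.type());
    // Columns [shiftX, width) move to [0, width - shiftX).
    inputImg.submat(new Rect(shiftX, 0, width - shiftX, height))
            .copyTo(outImg.submat(new Rect(0, 0, width - shiftX, height)));
    // Columns [0, shiftX) wrap around to [width - shiftX, width).
    inputImg.submat(new Rect(0, 0, shiftX, height))
            .copyTo(outImg.submat(new Rect(width - shiftX, 0, shiftX, height)));
    return outImg;
}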

Graphics2D transformation result does not match manual transformation

I am using Java's Graphics2D to draw on a component, using AffineTransforms to manipulate my drawing.
Graphics2D offers a method transform for this, which takes an AffineTransform.
Sometimes I need to manipulate a point manually, without using the built-in transformation.
But when I transform a point using the same transformation I gave to Graphics2D.transform, the resulting point is sometimes not the same.
The following code reproduces the problem (It's Scala code, but I think you can imagine the Java code.):
var transformationMatrix = new AffineTransform()
/*
 * transformationMatrix is modified throughout the program
 * ...
 */
override def paintComponent(g: Graphics2D) = {
  super.paintComponent(g)
  /* 1. transform using graphics transform */
  g.transform(transformationMatrix)
  g.setColor(Color.RED)
  g.fill(new Rectangle(0, 0, 1, 1))
  /* 2. transform point manually */
  g.setTransform(new AffineTransform) // reset transformation to the identity
  val p0 = new Point(0, 0)
  val pDest = new Point()
  transformationMatrix.transform(p0, pDest)
  g.setColor(Color.BLUE)
  g.fill(new Rectangle(pDest.x, pDest.y, 1, 1))
}
Expected behaviour
The blue rectangle (manually calculated) overdraws the red one (calculated by transform).
Experienced behaviour
I admit that my transformationMatrix is not integer-valued, but that shouldn't be the problem, should it?
affineTransform = 1.1, 0.0, 520.55
0.0, 1.1, 182.54999999999995
0.0, 0.0, 1.0
Is this a bug or am I missing some deep insight?
Edit: You can reproduce the bug, if you set transformationMatrix to
transformationMatrix = new AffineTransform(1.1, 0.0, 0.0, 1.1, 521.55, 183.54999999999995)
at the beginning of paintComponent. Please note that g is of type Graphics2D.
Your transform is basically just a translation by (520.55, 182.55), and because it has fractional pixel values it is in fact sensitive to the choice of round-off. If you have anti-aliasing on, you'll actually get a 4-pixel red blob covering the pixels that are overlapped. Otherwise, the disagreement you're seeing is reasonable, given the ambiguity between rounding to integer and truncating to integer.
Well, you are doing two different things.
In (1) you subject a shape (and it is irrelevant that it is Rectangle and not Rectangle2D.Double) to a transform that yields fractional coordinates. It is only painted aliased because you haven't set the corresponding rendering hints (RenderingHints.KEY_ANTIALIASING -> RenderingHints.VALUE_ANTIALIAS_ON, and RenderingHints.KEY_STROKE_CONTROL -> RenderingHints.VALUE_STROKE_PURE; see the snippet below for how to set them).
In (2) you subject a point to the transform and coerce it into aliased coordinates (Point instead of Point2D.Double). You then construct a rectangle from that point.
Clearly there may be very different things happening under the hood, and I wouldn't expect at all that transforming into an integer point versus painting a floating-point shape in an aliasing graphics context yields the same results.
(Without testing) I would guess that a valid equivalent statement for (1) would be
g.fill(transformationMatrix.createTransformedShape(new Rectangle(0, 0, 1, 1)))
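For reference, those rendering hints would be set on the Graphics2D like this (a sketch, not part of the original answer; g is the Graphics2D from the question):
g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
g.setRenderingHint(RenderingHints.KEY_STROKE_CONTROL, RenderingHints.VALUE_STROKE_PURE);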
When you perform the first step, g.transform(transformationMatrix), the Graphics composes it with the transformations already present. In the second step you override it with g.setTransform(new AffineTransform), thus losing any previous transformation. You are assuming you are back at the start, but that might not be true.
Call getTransform() before step 1 and again after step 2 to verify that the two are the same.
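A small sketch of that check-and-restore pattern, reusing g and transformationMatrix from the question (written as Java rather than Scala):
AffineTransform saved = g.getTransform();   // whatever transform Swing has already installed
g.transform(transformationMatrix);          // compose our transform on top of it
g.fill(new Rectangle(0, 0, 1, 1));
g.setTransform(saved);                      // restore the original state instead of resetting to the identity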
Whenever you work with floating-point coordinates, you should use the '2D' versions of the graphical objects if you want correct results. I didn't read that in a book, so I can't quote it; it is just my experience.
Here is my ugly Java code that produces the result you are expecting.
AffineTransform transformationMatrix = AffineTransform.getTranslateInstance(520.55, 182.54999999999995);
transformationMatrix.scale(1.1, 1.1);
((Graphics2D) previewGraphics).transform(transformationMatrix);
previewGraphics.setColor(Color.RED);
((Graphics2D) previewGraphics).fill(new Rectangle(0,0,1,1));
((Graphics2D) previewGraphics).setTransform(new AffineTransform());
Point2D p0 = new Point2D.Double(0, 0);
Point2D pDest = new Point2D.Double();
transformationMatrix.transform(p0, pDest);
previewGraphics.setColor(Color.BLUE);
((Graphics2D) previewGraphics).fill((Shape) new Rectangle2D.Double(pDest.getX(), pDest.getY(), 1, 1));
